Re: pax-jdbc-config connection pool configuration

2018-05-15 Thread Tim Ward
Another option for connection pooling would be to use the OSGi Transaction 
Control service from the R7 release. The resource providers all give implicit 
support for pooling, and the Aries implementation allows you to create them 
purely from configuration. 

The transaction control service also provides a more reliable mechanism for 
managing the transaction lifecycle than proxying/annotations. 

There’s a post about Transaction Control on the OSGi blog at 
https://blog.osgi.org/2018/05/osgi-r7-highlights-transaction-control.html and 
documentation at Apache Aries. The 1.0.0 release happened about two weeks ago 
and has been tested in Karaf.
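
As a sketch of what this looks like from application code (the component, table and resource names below are made up for illustration; the JDBCConnectionProvider would come from the Aries tx-control JDBC resource provider, typically created from factory configuration with pooling enabled by default):

import java.sql.Connection;
import java.sql.PreparedStatement;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.transaction.control.TransactionControl;
import org.osgi.service.transaction.control.jdbc.JDBCConnectionProvider;

@Component(service = ResponderRepository.class)
public class ResponderRepository {

    @Reference
    TransactionControl txControl;

    @Reference
    JDBCConnectionProvider provider;

    private Connection connection;

    @Activate
    void activate() {
        // The returned Connection is a thread-safe proxy that enlists with
        // whatever transaction is active when it is used
        connection = provider.getResource(txControl);
    }

    public void addResponder(String name) {
        // required() starts (or joins) a transaction, commits on success
        // and rolls back if the work throws an exception
        txControl.required(() -> {
            try (PreparedStatement ps = connection.prepareStatement(
                    "INSERT INTO responders (name) VALUES (?)")) {
                ps.setString(1, name);
                return ps.executeUpdate();
            }
        });
    }
}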

Best Regards,

Tim

Sent from my iPhone

> On 15 May 2018, at 08:37, Christian Schneider  wrote:
> 
> The docs indeed say to use jdbc.pool.maxTotal, but in the code I see that the 
> pool properties are filtered using "pool." 
> 
> See:
> https://github.com/ops4j/org.ops4j.pax.jdbc/blob/master/pax-jdbc-pool-dbcp2/src/main/java/org/ops4j/pax/jdbc/pool/dbcp2/impl/DbcpPooledDataSourceFactory.java
> 
> So can you try with pool.maxTotal? Still this is a bug - either the docs or 
> the code is wrong.
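
For reference, if the code is what is authoritative here, the corrected line in the original org.ops4j.datasource-responder.cfg would presumably be:

pool.maxTotal=8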
> 
> You can find the link to the issue tracker on the top level of the code base 
> in the README:
> https://github.com/ops4j/org.ops4j.pax.jdbc
> 
> Be aware though that you need to ask for a jira user on the ops4j list 
> (op...@googlegroups.com) first. 
> The self registration is switched off as there was lots of spam.
> 
> Christian
> 
> 
> 2018-05-14 21:47 GMT+02:00 Alex Soto :
>> Using Karaf  4.2.0, I am trying to configure connection pool using 
>> pax-jdbc-config  approach.  I installed features:
>> 
>> pax-jdbc-mariadb
>> pax-jdbc-config
>> pax-jdbc-pool-dbcp2
>> 
>> 
>> 
>> I dropped an org.ops4j.datasource-responder.cfg file in the etc directory:
>> 
>> osgi.jdbc.driver.name = mariadb
>> dataSourceName=responder
>> url = jdbc:mariadb://localhost:3306/responder
>> user=
>> password=
>> pool=dbcp2
>> xa=true
>> databaseName=responder
>> jdbc.pool.maxTotal=8
>> 
>> 
>> The last line causes this error:
>> 
>> cannot set properties [pool.maxTotal]
>> java.sql.SQLException: cannot set properties [pool.maxTotal]
>>  at 
>> org.ops4j.pax.jdbc.mariadb.impl.MariaDbDataSourceFactory.setProperties(MariaDbDataSourceFactory.java:70)
>>  ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.mariadb.impl.MariaDbDataSourceFactory.createDataSource(MariaDbDataSourceFactory.java:36)
>>  ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.DataSourceRegistration.createDs(DataSourceRegistration.java:134)
>>  ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.DataSourceRegistration.(DataSourceRegistration.java:80)
>>  ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.DataSourceConfigManager.lambda$null$0(DataSourceConfigManager.java:81)
>>  ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.ServiceTrackerHelper$1.addingService(ServiceTrackerHelper.java:131)
>>  ~[?:?]
>>  at 
>> org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:941)
>>  ~[?:?]
>>  at 
>> org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:870)
>>  ~[?:?]
>>  at 
>> org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:256) 
>> ~[?:?]
>>  at 
>> org.osgi.util.tracker.AbstractTracked.trackInitial(AbstractTracked.java:183) 
>> ~[?:?]
>>  at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:318) 
>> ~[?:?]
>>  at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:261) 
>> ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.ServiceTrackerHelper.track(ServiceTrackerHelper.java:140)
>>  ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.DataSourceConfigManager.lambda$null$1(DataSourceConfigManager.java:77)
>>  ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.ServiceTrackerHelper.track(ServiceTrackerHelper.java:146)
>>  ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.ServiceTrackerHelper.track(ServiceTrackerHelper.java:85)
>>  ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.DataSourceConfigManager.lambda$null$2(DataSourceConfigManager.java:76)
>>  ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.ServiceTrackerHelper$1.addingService(ServiceTrackerHelper.java:131)
>>  ~[?:?]
>>  at 
>> org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:941)
>>  ~[?:?]
>>  at 
>> org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:870)
>>  ~[?:?]
>>  at 
>> org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:256) 
>> ~[?:?]
>>  at 
>> org.osgi.util.tracker.AbstractTracked.trackInitial(AbstractTracked.java:183) 
>> ~[?:?]
>>  at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:318) 
>> ~[?:?]
>>  at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:261) 
>> ~[?:?]
>>  at 
>> org.ops4j.pax.jdbc.config.impl.ServiceTrackerHelper.track(ServiceTrackerHelper.java:140)
>>  ~[

Re: OpenJPA with AriesJPA Java.persistence

2018-05-11 Thread Tim Ward
Yes, it looks like your jta-data-source name is wrong. The DataSource is 
registered with osgi.jndi.service.name = responder, not jdbc/responder, so I’m 
pretty sure that the filter in your JNDI URL won’t match. 
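
A sketch of the corrected element, assuming the DataSource keeps its current 
osgi.jndi.service.name of responder:

<jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=responder)</jta-data-source>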

Tim

Sent from my iPhone

> On 11 May 2018, at 21:07, Alex Soto  wrote:
> 
> Thank you Tim, I appreciate the help.
> Yes, an EntityManagerFactoryBuilder is registered as a service by my bundle:
> 
> karaf@root()> service:list org.osgi.service.jpa.EntityManagerFactoryBuilder
> [org.osgi.service.jpa.EntityManagerFactoryBuilder]
> --
>  osgi.unit.name = responderPersistenUnit
>  osgi.unit.provider = org.apache.openjpa.persistence.PersistenceProviderImpl
>  osgi.unit.version = 1.0.0.SNAPSHOT
>  service.bundleid = 138
>  service.id = 198
>  service.scope = singleton
> 
> DataSource is also registered:
> 
> karaf@root()> service:list javax.sql.DataSource
> [javax.sql.DataSource]
> --
>  dataSourceName = responder
>  felix.fileinstall.filename = file:x/org.ops4j.datasource-responder.cfg
>  osgi.jdbc.driver.name = mariadb
>  osgi.jndi.service.name = responder
> 
> karaf@root()> ds-list 
> Name  │ Product │ Version │ URL   
>   │ Status
> ──┼─┼─┼─┼───
> responder │ MySQL   │ 10.2.13-MariaDB │ 
> jdbc:mariadb://:3306/responder?characterEncoding=UTF-8&useServerPrepStmts=true
>  │ OK
> 
> 
> Jndi shows:
> 
> karaf@root()> jndi:names 
> JNDI Name  │ Class Name
> ───┼───
> osgi:service/responder │ org.mariadb.jdbc.MySQLDataSource
> osgi:service/jndi  │ org.apache.karaf.jndi.internal.JndiServiceImpl
> 
> My Persistence Unit is defined as:
> 
>   
>   
> org.apache.openjpa.persistence.PersistenceProviderImpl
>   
> osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/responder)
>   true
>   org.enquery.encryptedquery.responder.data.User
>   
>       
>   
>   
> 
> 
> I am wondering whether the “jta-data-source” property is wrong?
> 
> 
> Best regards,
> Alex soto
> 
> 
> 
> 
>> On May 11, 2018, at 3:48 PM, Tim Ward  wrote:
>> 
>> Hi Alex,
>> 
>> So the logs you’ve sent indicate that your persistence bundle is being 
>> found, and that it’s being matched with OpenJPA. These are both good things. 
>> 
>> The next step in the process is to locate and set up the connections to the 
>> database. Depending on how you’re setting up your persistence.xml this can 
>> happen automatically, but more normally it requires configuration and/or use 
>> of the EntityManagerFactoryBuilder service. 
>> 
>> Things to check are that:
>> 
>> • you do see an EntityManagerFactoryBuilder service
>> • you’re deploying a valid database driver supporting the JDBC service
>> • your database url and driver class match the driver you’re deploying 
>> • you’re using the correct pid/unit.name
>> 
>> I hope this helps.
>> 
>> Tim
>> 
>> Sent from my iPhone
>> 
>>> On 11 May 2018, at 19:58, Alex Soto  wrote:
>>> 
>>> What is strange is that (based on the logs) it seems as if the persistence 
>>> unit is being discovered: 
>>> 
>>> 14:50:44.050 INFO [features-3-thread-1] Found persistence unit 
>>> responderPersistenUnit in bundle 
>>> org.enquery.encryptedquery.responder-data-jpa-entity-manager with provider 
>>> org.apache.openjpa.persistence.PersistenceProviderImpl.
>>> 14:50:44.052 INFO [features-3-thread-1] Found provider for 
>>> responderPersistenUnit 
>>> org.apache.openjpa.persistence.PersistenceProviderImpl
>>> 14:50:44.142 INFO [features-3-thread-1] Adding transformer 
>>> org.apache.openjpa.persistence.PersistenceProviderImpl$ClassTransformerImpl
>>> 
>>> 
>>> But the javax.persistence.EntityManager service is not being registered, 
>>> and there are no errors.
>>> 
>>> 
>>>> On May 11, 2018, at 2:19 PM, Alex Soto  wrote:
>>>> 
>>>> Ok, I made some progress (I guess) I am no longer getting the original 
>>>> error:  java.lang.ClassCastException: 
>>>> org.apache.openjpa.persistence.PersistenceProviderImpl cannot be cast to 
>>>> javax.persistence.spi.PersistenceProvider
>&g

Re: OpenJPA with AriesJPA Java.persistence

2018-05-11 Thread Tim Ward
Hi Alex,

So the logs you’ve sent indicate that your persistence bundle is being found, 
and that it’s being matched with OpenJPA. These are both good things. 

The next step in the process is to locate and set up the connections to the 
database. Depending on how you’re setting up your persistence.xml this can 
happen automatically, but more normally it requires configuration and/or use of 
the EntityManagerFactoryBuilder service. 

Things to check are that:

• you do see an EntityManagerFactoryBuilder service
• you’re deploying a valid database driver supporting the JDBC service
• your database url and driver class match the driver you’re deploying 
• you’re using the correct pid/unit.name

I hope this helps.

Tim

Sent from my iPhone

> On 11 May 2018, at 19:58, Alex Soto  wrote:
> 
> What is strange is that (based on the logs) it seems as if the persistence 
> unit is being discovered: 
> 
> 14:50:44.050 INFO [features-3-thread-1] Found persistence unit 
> responderPersistenUnit in bundle 
> org.enquery.encryptedquery.responder-data-jpa-entity-manager with provider 
> org.apache.openjpa.persistence.PersistenceProviderImpl.
> 14:50:44.052 INFO [features-3-thread-1] Found provider for 
> responderPersistenUnit org.apache.openjpa.persistence.PersistenceProviderImpl
> 14:50:44.142 INFO [features-3-thread-1] Adding transformer 
> org.apache.openjpa.persistence.PersistenceProviderImpl$ClassTransformerImpl
> 
> 
> But the javax.persistence.EntityManager service is not being registered, and 
> there are no errors.
> 
> 
>> On May 11, 2018, at 2:19 PM, Alex Soto  wrote:
>> 
>> Ok, I made some progress (I guess) I am no longer getting the original 
>> error:  java.lang.ClassCastException: 
>> org.apache.openjpa.persistence.PersistenceProviderImpl cannot be cast to 
>> javax.persistence.spi.PersistenceProvider
>> 
>> 
>> I added my own version of the jpa feature, in which I substitute the line
>> 
>> > dependency="true">mvn:org.eclipse.persistence/javax.persistence/2.1.0
>> 
>> 
>> With:
>> > dependency="true">mvn:org.apache.geronimo.specs/geronimo-jta_1.1_spec/1.1.1
>> 
>> Which results in:
>> 
>> <bundle>mvn:org.apache.geronimo.specs/geronimo-jpa_2.0_spec/1.1</bundle>
>> <bundle dependency="true">mvn:org.apache.geronimo.specs/geronimo-jta_1.1_spec/1.1.1</bundle>
>> <bundle dependency="true">mvn:org.osgi/org.osgi.service.jdbc/1.0.0</bundle>
>> <bundle start-level="30">mvn:org.apache.felix/org.apache.felix.coordinator/1.0.2</bundle>
>> <bundle start-level="30">mvn:org.apache.aries.jpa/org.apache.aries.jpa.api/${aries.jpa.version}</bundle>
>> <bundle start-level="30">mvn:org.apache.aries.jpa/org.apache.aries.jpa.container/${aries.jpa.version}</bundle>
>> <bundle start-level="30">mvn:org.apache.aries.jpa/org.apache.aries.jpa.support/${aries.jpa.version}</bundle>
>> 
>> <conditional>
>>   <condition>aries-blueprint</condition>
>>   <bundle start-level="30">mvn:org.apache.aries.jpa/org.apache.aries.jpa.blueprint/${aries.jpa.version}</bundle>
>> </conditional>
>> 
>>  
>> 
>> 
>> 
>> Now, in my own feature, I have:
>> 
>>  aries-blueprint
>>  jndi
>>  jdbc
>>  transaction
>>  aries-jpa2
>>  openjpa
>>  pax-jdbc-mariadb
>> pax-jdbc-config
>> 
>> Among others.  Now my bundle fails to start:
>> 
>> Status: GracePeriod
>> Declarative Services
>> Blueprint
>> 5/11/18 2:14 PM
>> Missing dependencies: 
>> (&(osgi.unit.name=responderPersistenUnit)(objectClass=javax.persistence.EntityManager))
>>  
>> 
>> There are no errors in the log, just this unresolved dependency.
>> Any idea about why my persistent unit is not being registered?
>> 
>> Best regards,
>> Alex soto
>> 
>> 
>> 
>> 
>>>> On May 11, 2018, at 11:09 AM, Tim Ward  wrote:
>>>> 
>>>> 
>>>> 
>>>>> On 11 May 2018, at 15:53, Alex Soto  wrote:
>>>>> 
>>>>> Thanks for the help Tim.
>>>>> 
>>>> 
>>>>> On May 11, 2018, at 10:24 AM, Tim Ward  wrote:
>>>>> 
>>>>> Aries JPA can work with either JPA 2.0, or JPA 2.1, and is tested with 
>>>>> EclipseLink, Hibernate and OpenJPA. 
>>>> 
>>>> I am looking at these integration tests, but the test itself does not use 
>>>> the feature, as defined in the feature.xml file.  It loads a different 
>>>> version of javax.persistence for the OpenJPA integration test. So u

Re: OpenJPA with AriesJPA Java.persistence

2018-05-11 Thread Tim Ward


> On 11 May 2018, at 15:53, Alex Soto  wrote:
> 
> Thanks for the help Tim.
> 
>> On May 11, 2018, at 10:24 AM, Tim Ward <tim.w...@paremus.com> wrote:
>> 
>> Aries JPA can work with either JPA 2.0, or JPA 2.1, and is tested with 
>> EclipseLink, Hibernate and OpenJPA. 
> 
> I am looking at these integration tests, but the test itself does not use 
> the feature, as defined in the feature.xml file.  It loads a different 
> version of javax.persistence for the OpenJPA integration test. So unless you 
> are an AriesJPA developer, you would not know about this.  How would anybody 
> figure this out? 
> 
> @Configuration
> public Option[] configuration() {
> return new Option[] {
> baseOptions(), //
> ariesJpa20(), //
> jta11Bundles(), // Openjpa currently does not work with jta 1.2. 
> See https://issues.apache.org/jira/browse/OPENJPA-2607 
> openJpa(), //
> derbyDSF(), //
> testBundle()
> };
> 
> Then the example does not use OpenJPA, but Hibernate, so there is no 
> information on how to make it work with OpenJPA out of the box.  
> One option here would be to have multiple specific features: jpa-hibernate, 
> jpa-openjpa, etc.

Yes, that’s pretty much what is needed, but Karaf would be the place to create 
and maintain those features.

> 
>> 
>> It is highly recommended that you use the JavaJPA contract in any of your 
>> bundles using JPA so that you are isolated from the API version number 
>> changes in the future (most Java EE specifications make major version bumps 
>> quite regularly).
>> 
> 
> I have this in my bundle’s osgi.bnd file:
> 
>   -contract: JavaJPA
> 
> Is that all that is needed?  It does not indicate version.

That is most of what is needed - you also need to be compiling against a 
library which offers the contract (for example the spec bundles provided by 
Aries). If you do that you will end up with Import-Package statements for 
javax.persistence (et al) with no version, but also a Require-Capability: 
osgi.contract;filter:=(&(osgi.contract=JavaJPA)(version=XXX)) where the XXX is 
determined from the Provide-Capability of the bundle you compiled against.
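
As an illustration only (the exact version list and filter value depend on the spec bundle you compile against), the providing bundle carries something like:

Provide-Capability: osgi.contract;osgi.contract=JavaJPA;version:List<Version>="2.1,2,1";uses:="javax.persistence,javax.persistence.criteria,javax.persistence.metamodel,javax.persistence.spi"

and the consuming bundle then ends up with:

Import-Package: javax.persistence
Require-Capability: osgi.contract;filter:="(&(osgi.contract=JavaJPA)(version=2.0.0))"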

> 
>> The real problem is that the AriesJPA feature shouldn’t exist as a 
>> standalone thing (it doesn’t make sense to deploy it on its own). It should 
>> come for free when you install the OpenJPA (or Hibernate, or EclipseLink) 
>> feature, using whichever API they have deployed.
> 
> Exactly, this is harder than it should be.  When I install a feature, I 
> expect the feature to bring in all that is needed, not having to chase down 
> all these dependencies.
> Is there an intention to take this approach any time soon?

I’m afraid that would be a decision for the Karaf maintainers rather than me. 
I’m only chipping in because I’m an Aries PMC member who deals quite a bit with 
the JPA and Tx Control components.

Best Regards,

Tim Ward

> 
> 
> 
> 
>> 
>> Tim
>> 
>> 
>>> On 11 May 2018, at 14:23, Alex Soto <alex.s...@envieta.com> wrote:
>>> 
>>> I had accidentally replied directly to Tim.  Repeating here:
>>> 
>>> Let me see if I understand this correctly:
>>> 
>>> Karaf version 4.2.0 enterprise repository depends on version 2.6.1 of 
>>> AriesJPA.
>>> AriesJPA version 2.6.1 depends on  javax.persistence version 2.1.0.
>>> Karaf’s enterprise repository defines a openjpa feature that depends on 
>>> OpenJPA version 2.4.2.
>>> OpenJPA version 2.4.2 depends on javax.persistence version 2.0.0.
>>>  
>>> Is this correct?
>>> Is there a bug in the Enterprise repository mixing incompatible versions 
>>> of OpenJPA and AriesJPA?
>>> Is the problem in OpenJPA not declaring the version it depends on?
>>> 
>>> Inspecting in Karaf’s console:
>>> 
>>> karaf@root()> list
>>> 
>>>  97 │ Active  │  80 │ 2.4.2   │ OpenJPA Aggregate Ja
>>> 
>>> karaf@root()> bundle:requirements 97
>>> 
>>> osgi.wiring.package; 
>>> (&(osgi.wiring.package=javax.persistence)(version>=1.1.0)(!(version>=2.1.0)))
>>>  resolved by:
>>>osgi.wiring.package; javax.persistence 2.0.0 from 
>>> org.apache.geronimo.specs.geronimo-jpa_2.0_spec [66]
>>> 
>>> 
>>> karaf@root()> feature:info jpa
>>> Feature jpa 2.6.1
>>> Description:
>>>   OSGi Persistence Container
>

Re: OpenJPA with AriesJPA Java.persistence

2018-05-11 Thread Tim Ward
Aries JPA can work with either JPA 2.0, or JPA 2.1, and is tested with 
EclipseLink, Hibernate and OpenJPA. 

It is highly recommended that you use the JavaJPA contract in any of your 
bundles using JPA so that you are isolated from the API version number changes 
in the future (most Java EE specifications make major version bumps quite 
regularly).

The real problem is that the AriesJPA feature shouldn’t exist as a standalone 
thing (it doesn’t make sense to deploy it on its own). It should come for free 
when you install the OpenJPA (or Hibernate, or EclipseLink) feature, using 
whichever API they have deployed.

Tim


> On 11 May 2018, at 14:23, Alex Soto  wrote:
> 
> I had accidentally replied directly to Tim.  Repeating here:
> 
> Let me see if I understand this correctly:
> 
> Karaf version 4.2.0 enterprise repository depends on version 2.6.1 of 
> AriesJPA.
> AriesJPA version 2.6.1 depends on  javax.persistence version 2.1.0.
> Karaf’s enterprise repository defines a openjpa feature that depends on 
> OpenJPA version 2.4.2.
> OpenJPA version 2.4.2 depends on javax.persistence version 2.0.0.
>  
> Is this correct?
> Is there a bug in the Enterprise repository mixing incompatible versions 
> of OpenJPA and AriesJPA?
> Is the problem in OpenJPA not declaring the version it depends on?
> 
> Inspecting in Karaf’s console:
> 
> karaf@root()> list
> 
>  97 │ Active  │  80 │ 2.4.2   │ OpenJPA Aggregate Ja
> 
> karaf@root()> bundle:requirements 97
> 
> osgi.wiring.package; 
> (&(osgi.wiring.package=javax.persistence)(version>=1.1.0)(!(version>=2.1.0))) 
> resolved by:
>osgi.wiring.package; javax.persistence 2.0.0 from 
> org.apache.geronimo.specs.geronimo-jpa_2.0_spec [66]
> 
> 
> karaf@root()> feature:info jpa
> Feature jpa 2.6.1
> Description:
>   OSGi Persistence Container
> Details:
>   JPA implementation provided by Apache Aries JPA 2.x. NB: this feature 
> doesn't provide the JPA engine, you have to install one by yourself (OpenJPA 
> for instance)
> Feature has no configuration
> Feature has no configuration files
> Feature has no dependencies.
> Feature contains followed bundles:
>   mvn:org.eclipse.persistence/javax.persistence/2.1.0
>   mvn:org.apache.geronimo.specs/geronimo-jta_1.1_spec/1.1.1
>   mvn:org.osgi/org.osgi.service.jdbc/1.0.0
>   mvn:org.apache.felix/org.apache.felix.coordinator/1.0.2 start-level=30
>   mvn:org.apache.aries.jpa/org.apache.aries.jpa.api/2.6.1 start-level=30
>   mvn:org.apache.aries.jpa/org.apache.aries.jpa.container/2.6.1 start-level=30
>   mvn:org.apache.aries.jpa/org.apache.aries.jpa.support/2.6.1 start-level=30
> 
> Best regards,
> Alex soto
> 
> 
> 
> 
>> On May 10, 2018, at 5:45 PM, Tim Ward <tim.w...@paremus.com> wrote:
>> 
>> OpenJPA 2.4.x supports JPA 2.0 (not 2.1); you can get the API you need from 
>> Apache Aries, as well as the JPA container. This is also all used and tested 
>> with Aries Transaction Control, so you can look at the bundles used there.
>> 
>> Best Regards,
>> 
>> Tim
>> 
>> Sent from my iPhone
>> 
>>> On 10 May 2018, at 20:43, Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
>>> 
>>> Anyway, let me check if OpenJPA 2.4.2 supports JPA 2.1 (that's what I 
>>> thought).
>>> 
>>> Regards
>>> JB
>>> 
>>>> On 05/10/2018 09:36 PM, Alex Soto wrote:
>>>> I am sorry I only see one version:
>>>> 
>>>> karaf@root()> feature:list | grep jpa
>>>> openjpa  │ 2.4.2│  │
>>>> Started │ enterprise-4.2.0  │ Apache OpenJPA 2.4.x
>>>> persistence engine support
>>>> camel-jpa│ 2.21.1   │   
>>>>  │ Uninstalled │ camel-2.21.1  │
>>>> deltaspike-jpa   │ 1.4.2│   
>>>>  │ Uninstalled │ org.ops4j.pax.cdi-1.0.0.RC2   │ Apache Deltaspike jpa 
>>>> support
>>>> deltaspike-jpa   │ 1.8.1│   
>>>>  │ Uninstalled │ org.ops4j.pax.cdi-1.0.0   │ Apache Deltaspike jpa 
>>>> support
>>>> jpa  │ 2.6.1│  │
>>>> Started │ aries-jpa-2.6.1   │ OSGi Persistence 
>>>> Container
>>>> 
>>>> 
>>>> 
>>>> Is there a repository I need to add?  
>>>> 
>>>> Best regards,
>>>> Alex soto
&g

Re: OpenJPA with AriesJPA Java.persistence

2018-05-10 Thread Tim Ward
OpenJPA 2.4.x supports JPA 2.0 (not 2.1); you can get the API you need from 
Apache Aries, as well as the JPA container. This is also all used and tested 
with Aries Transaction Control, so you can look at the bundles used there.

Best Regards,

Tim

Sent from my iPhone

> On 10 May 2018, at 20:43, Jean-Baptiste Onofré  wrote:
> 
> Anyway, let me check if OpenJPA 2.4.2 supports JPA 2.1 (that's what I thought).
> 
> Regards
> JB
> 
>> On 05/10/2018 09:36 PM, Alex Soto wrote:
>> I am sorry I only see one version:
>> 
>> karaf@root()> feature:list | grep jpa
>> openjpa  │ 2.4.2│  │
>> Started │ enterprise-4.2.0  │ Apache OpenJPA 2.4.x
>> persistence engine support
>> camel-jpa│ 2.21.1   │   
>>   │ Uninstalled │ camel-2.21.1  │
>> deltaspike-jpa   │ 1.4.2│   
>>   │ Uninstalled │ org.ops4j.pax.cdi-1.0.0.RC2   │ Apache Deltaspike jpa 
>> support
>> deltaspike-jpa   │ 1.8.1│   
>>   │ Uninstalled │ org.ops4j.pax.cdi-1.0.0   │ Apache Deltaspike jpa 
>> support
>> jpa  │ 2.6.1│  │
>> Started │ aries-jpa-2.6.1   │ OSGi Persistence Container
>> 
>> 
>> 
>> Is there a repository I need to add?  
>> 
>> Best regards,
>> Alex soto
>> 
>> 
>> 
>>> On May 10, 2018, at 3:25 PM, Jean-Baptiste Onofré wrote:
>>> Karaf provides both jpa 1.x and 2.x features.
>>> 
>>> You just have to install the right one depending on the engine you are 
>>> using:
>>> 
>>> feature:install jpa/1.x
>>> feature:install  openjpa
>>> 
>>> Regards
>>> JB
>>> 
 On 05/10/2018 09:23 PM, Alex Soto wrote:
 Thanks JB,
 
 I was hoping to use whatever was defined in Karaf’s enterprise feature,
 but if that doesn’t work, then which version do I need?  I am afraid that if I
 deviate from the versions selected by Karaf’s Enterprise feature I will get
 into more version mismatch problems.   Also what do I put in my POM for
 javax.persistence dependency?
 
 
 Best regards,
 Alex soto
 
 
 
> On May 10, 2018, at 3:16 PM, Jean-Baptiste Onofré wrote:
> 
> Hi,
> 
> OpenJPA 2.x still uses JPA 1.x. By default, jpa feature will provide 2.x
> version.
> 
> You should specify the jpa feature version.
> 
> Regards
> JB
> 
>> On 05/10/2018 09:08 PM, Alex Soto wrote:
>> Hello,
>> 
>> I am running Karaf 4.2.0, trying to setup a project with OpenJPA.  I am 
>> getting
>> error:
>> 
>> 
>> 14:44:07.799 ERROR [FelixDispatchQueue] FrameworkEvent ERROR
>> - org.apache.aries.jpa.container
>> java.lang.ClassCastException:
>> org.apache.openjpa.persistence.PersistenceProviderImpl
>> cannot be cast to javax.persistence.spi.PersistenceProvider
>> at
>> org.apache.aries.jpa.container.impl.PersistenceProviderTracker.addingService(PersistenceProviderTracker.java:84)
>> ~[?:?]
>> at
>> org.apache.aries.jpa.container.impl.PersistenceProviderTracker.addingService(PersistenceProviderTracker.java:44)
>> ~[?:?]
>> at
>> org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:941)
>> ~[?:?]
>> at
>> org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:870)
>> ~[?:?]
>> at 
>> org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:256)
>> ~[?:?]
>> at 
>> org.osgi.util.tracker.AbstractTracked.trackInitial(AbstractTracked.java:183)
>> ~[?:?]
>> at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:318) 
>> ~[?:?]
>> at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:261) 
>> ~[?:?]
>> at
>> org.apache.aries.jpa.container.impl.PersistenceBundleTracker.trackProvider(PersistenceBundleTracker.java:103)
>> ~[?:?]
>> at
>> org.apache.aries.jpa.container.impl.PersistenceBundleTracker.findPersistenceUnits(PersistenceBundleTracker.java:87)
>> ~[?:?]
>> at
>> org.apache.aries.jpa.container.impl.PersistenceBundleTracker.addingBundle(PersistenceBundleTracker.java:66)
>> ~[?:?]
>> at
>> org.apache.aries.jpa.container.impl.PersistenceBundleTracker.addingBundle(PersistenceBundleTracker.java:39)
>> ~[?:?]
>> at
>> org.osgi.util.tracker.BundleTracker$Tracked.customizerAdding(BundleTracker.java:469)
>> ~[?:?]
>> at
>> org.osgi.util.tracker.BundleTracker$Tracked.customizerAdding(BundleTracker.java:415)
>> ~[?:?]
>> at 
>> org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:256)
>> ~[?:?]
>> at org.osgi.util.tracker.AbstractTracked.track(AbstractTracked.java:229) 
>> ~[?:?]
>> at
>> org.osgi.util.

Re: Recommended way to use an XML parser in OSGi ?

2018-03-27 Thread Tim Ward
Given that we’re now resorting to class loader trickery as a workaround, 
wouldn’t it actually make more sense to use the specification which solves this 
properly? The object in question is already a DS component, so requiring a 
service is about as low effort as possible, and there must be a suitably 
packaged DocumentBuilder implementation somewhere in Karaf?
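
As a sketch of that approach (the class and method names here are illustrative, not from the original code): a DS component can simply reference a DocumentBuilderFactory service instead of going through the JAXP factory lookup, which sidesteps the thread context class loader entirely:

import java.io.InputStream;

import javax.xml.parsers.DocumentBuilderFactory;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.w3c.dom.Document;

@Component(service = FeatureFileParser.class)
public class FeatureFileParser {

    // Injected from the service registry, e.g. from an XML Parser Service
    // implementation that registers a DocumentBuilderFactory
    @Reference
    DocumentBuilderFactory factory;

    public Document parse(InputStream in) throws Exception {
        return factory.newDocumentBuilder().parse(in);
    }
}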

Tim

> On 27 Mar 2018, at 17:53, Kerry  wrote:
> 
> I'm a bit puzzled why it works with a Karaf command as this will be executed 
> with a differing classloader but as I don't have your code it's hard for me 
> to visualise your imports etc correctly.
> 
> I would temporarily change the classloader to that of a class that is in your 
> bundle, typically I would make it that of the class which is the entry point 
> from the 'outside world' into your bundle. Don't forget to reset the 
> classloader before the thread leaves your bundle and do this inside a finally 
> clause.
> 
> Kerry
> 
> On 27/03/18 22:16, Nicolas Brasey wrote:
>> Ideally I would like to avoid overloading too much from the vanilla karaf to 
>> avoid too much dependencies to ease the future karaf version upgrades, but 
>> since I'm blocked at the moment I can give it a try. The best for me would 
>> be to find a nasty workaround with the classloader just to make it work. 
>> Which classloader can I set on the current thread to make this work?
>> 
>> Thanks a lot Guillaume for your help!
>> 
>> Cheers,
>> Nicolas
>> 
>> 
>> 
>> On Tue, Mar 27, 2018 at 8:09 PM, Guillaume Nodet wrote:
>> You're right, I think the current setup on 4.1 is not optimal and a bit too 
>> sensitive to the thread context class loader.
>> Can you simply remove the xalan and xerces jars from the lib/endorsed folder 
>> and the corresponding export packages in etc/config.properties ?
>> If you want to use xalan and xerces, deploy them as bundles instead.
>> Also, please raise a JIRA so that we can fix that.
>> 
>> 2018-03-27 18:35 GMT+02:00 Nicolas Brasey:
>> Yes that was also my understanding and this is what I'm doing. Like I said 
>> before, it works well from the karaf command but not when the call is 
>> initiated from somewhere else. 
>> 
>> 
>> 
>> On Tue, Mar 27, 2018 at 6:21 PM, Guillaume Nodet wrote:
>> In Karaf, we ensure that you can use DocumentBuilderFactory#newInstance().
>> That's the standard java api to create a Parser and it works well in Karaf.
>> 
>> 2018-03-27 18:11 GMT+02:00 Nicolas Brasey:
>> Hi Guillaume,
>> 
>> Thanks for those infos. I'm running Karaf 4.1.2.  So I tried to lookup a 
>> DocumentBuilderFactory service but no one are availably in standard in this 
>> karaf version.
>> 
>> What do you exactly mean by "it should already work" ? If it can help, I 
>> actually based my implementation on the example I found in Karaf, in the Kar 
>> service implementation, the class FeatureDetector. It is parsing an XML 
>> feature file, which is exactly what I'm trying to do as well. My 
>> implementation is 1 to 1.
>> 
>> So the code seems perfectly working when a karaf command is calling it, but 
>> badly failing when coming from a jetty thread (REST endpoint). 
>> 
>> I would be happy to get away with a nasty workaround, I'm not looking by a 
>> by-the-book implementation :-)
>> 
>> Thanks again!
>> Nicolas   
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> On Tue, Mar 27, 2018 at 5:02 PM, Guillaume Nodet wrote:
>> Here's the way to solve the problem for Karaf 4.2, hopefully I can merge it 
>> before the release is done:
>>   https://github.com/apache/karaf/pull/481 
>> 
>> 
>> For 4.1, the distribution should already work.
>> 
>> The OSGi 133 (Service Loader) and 702 (XML Parser) are clearly not 
>> sufficient when working with libraries that have not been built solely for 
>> OSGi and which use the standard way to use the XML apis.
>> 
>> Fwiw, the openjdk code contains code specific to glassfish osgi environment, 
>> in a similar way than the above PR.
>> 
>> 
>> 2018-03-27 16:57 GMT+02:00 Nicolas Brasey:
>> Hi Kerry,
>> 
>> Yes it executes in another thread (jetty http executor thread pool), so the 
>> context is different. 
>> 
>> The code actually fails quite deep in the abyss of the java service loader:
>> 
>>  
>> Caused by: java.util.ServiceConfigurationError: 
>> javax.xml.parsers.DocumentBuilderFactory: Provider 
>> org.apache.xerces.jaxp.DocumentBuilderFactoryImpl not found
>>  at java.util.ServiceLoader.fail(ServiceLoader.java:239) ~[?:?]
>> 
>> 
>> I should switch the current thread classloader to use the classloader of the 
>> class java.util.ServiceLoader?
>> 
>> Thanks!!
>> Nicolas
>> 
>> 
>> 
>> On Tue, Mar 27, 2018 at 3:38 PM, Kerry wrote:
>> Hazarding a guess at this but when it fails when called by t

Re: Recommended way to use an XML parser in OSGi ?

2018-03-27 Thread Tim Ward
The recommendation would almost certainly be to use the XML Parser Service. You 
can see it in the spec at 
https://osgi.org/specification/osgi.cmpn/7.0.0/util.xml.html - note that this 
spec is pretty old, so all the examples use the low-level OSGi framework API.
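
A minimal sketch of consuming that service with the low-level API the spec examples use (the bundleContext and xmlFile variables are assumed, null handling is omitted, and a DS @Reference on DocumentBuilderFactory achieves the same with less code):

ServiceReference<DocumentBuilderFactory> ref =
        bundleContext.getServiceReference(DocumentBuilderFactory.class);
DocumentBuilderFactory factory = bundleContext.getService(ref);
Document doc = factory.newDocumentBuilder().parse(xmlFile);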

Best Regards,

Tim

> On 27 Mar 2018, at 09:17, Nicolas Brasey  wrote:
> 
> Hi,
> 
> I'm feeling frustrated because, like every time I venture into XML in an 
> OSGi context, I end up with classloading issues, and this time is no 
> exception :-) So I would like to know what/how you guys are doing it...
> 
> My use case is extremely simple, yet I can't figure out what I'm doing wrong. 
> I need to use an XML parser to get a Document object from an XML file. This 
> XML parsing code is embedded inside a service (DS). The weird thing is that 
> If I invoke this service with a karaf command, then it works fine. If the 
> same code is invoked through a REST endpoint (another bundle), then I get the 
> following class not found:
> 
> Caused by: java.util.ServiceConfigurationError: 
> javax.xml.parsers.DocumentBuilderFactory: Provider 
> org.apache.xerces.jaxp.DocumentBuilderFactoryImpl not found
>   at java.util.ServiceLoader.fail(ServiceLoader.java:239) ~[?:?]
>   at java.util.ServiceLoader.access$300(ServiceLoader.java:185) ~[?:?]
>   at 
> java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:372) 
> ~[?:?]
> 
> 
> AFAIK, Karaf is pulling the servicemix implementation of Xerces, and I 
> double-checked that the package is available in Karaf:
> 
> 
> dms@root>exports | grep org.apache.xerces.jaxp
> org.apache.xerces.jaxp.datatype   
>  │ 2.11.0 │ 348 │ 
> org.apache.servicemix.bundles.xerces
> org.apache.xerces.jaxp.validation 
>  │ 2.11.0 │ 348 │ 
> org.apache.servicemix.bundles.xerces
> org.apache.xerces.jaxp
>  │ 2.11.0 │ 348 │ 
> org.apache.servicemix.bundles.xerces
> 
> 
> 
> So, I don't know what I'm doing wrong here.
> 
> Any clue ?
> 
> 
> Thanks,
> Nicolas
> 



Re: Regarding Configuration Types

2018-02-26 Thread Tim Ward
Scott,

Those two issues are actually different things. The first one is to do with the 
defaulting of configuration, the second is to do with how File Install 
interprets property values with trailing spaces.

As for why you don’t get the default - your configuration explicitly contains a 
mapping to the empty string. Defaults only apply when the key isn’t present, if 
it is present then you get the supplied value.
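
A minimal illustration, using the MetricProviderConfig example from the quoted mail:

# schedule key omitted from the .cfg file -> schedule() returns the default "0"

# schedule key present but empty          -> schedule() returns ""
schedule =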

Best Regards,

Tim

> On 25 Feb 2018, at 21:14, Ryan Moquin  wrote:
> 
> If trimming was performed by default on properties, wouldn't that place a 
> little extra performance overhead into OSGi vs developers just making sure 
> there isn't trailing whitespace if they don't want it?  It wouldn't be much, 
> but it adds up...
> 
> Ryan
> 
> On Thu, Feb 22, 2018 at 12:52 PM Leschke, Scott wrote:
> I have a configuration type that has a fragment in it as shown below.
> 
>  
> 
> @ProviderType
> 
> @ObjectClassDefinition(name = "Provider Configuration")
> 
> public @interface MetricProviderConfig
> 
> {
> 
>String schedule() default "0";
> 
> }
> 
>  
> 
> If the associated property in a .cfg file exists but has no value, as in:
> 
>  
> 
> schedule =
> 
>  
> 
> I get the empty string “” as opposed to the default, which is what I would 
> expect. While this is preferable to a null, which I got at some on some 
> earlier Karaf release, I would think that you’d get the default whether the 
> property didn’t exist or existed with no value.
> 
>  
> 
> Another comment, which perhaps is more general to OSGi in this area, is that 
> properties aren’t trimmed. I honestly can’t think of a use case where 
> somebody would want trailing white space passed in.  Also, if the 
> configuration type exposes an enumeration, an error occurs.
> 
>  
> 
> @ProviderType
> 
> @ObjectClassDefinition(name = "Provider Configuration")
> 
> public @interface MetricProviderConfig
> 
> {
> 
>MyEnum enumValue() default MyEnum.ENUM_VALUE;
> 
> }
> 
>  
> 
> So the first property below works, but the second one doesn’t.  Is this by 
> design?
> 
>  
> 
> enumValue = ENUM_VALUE
> 
> enumValue = ENUM_VALUE    (the same value, but followed by trailing whitespace)
> 
>  
> 
> Regards,
> 
>  
> 
> Scott
> 



Re: Bundle start is an asynchronous call

2018-01-31 Thread Tim Ward
You must never write code like this in OSGi. It is a serious error to assume 
that a service will be available immediately after a bundle has started, you 
must always listen for the service becoming available (which may happen a long 
time in the future).

I’m not sure what it is that you’re doing which requires you to use the 
low-level API like this. I would not recommend directly interacting with the 
bundle context, but instead using a dependency management container such as 
Declarative Services to inject you with the service when it becomes available. 
Using the low level API is typically a good way to ensure that your code is 
complex and unreadable.
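
A sketch of the Declarative Services alternative (the interface I is taken from the question below; the component and method names are made up):

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component
public class Consumer {

    // DS activates this component only once a provider of I has been
    // registered, and deactivates it again if that provider goes away
    @Reference
    private I service;

    public void doWork() {
        service.someMethod();
    }
}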

Regards,

Tim

> On 31 Jan 2018, at 13:59, SAI3292  wrote:
> 
> Hi 
> 
> I am trying to update a resolved bundle and then start it; the bundle 
> provides the implementation for interface I. Afterwards I immediately get the 
> service reference for interface I. Interface I is in a different module, 
> which is started.
> Following is the code:
> 
> Bundle bundle = bundleContext.getBundle("mvn:path");
> bundle.update();
> bundle.start();
> Collection<ServiceReference<I>> referenceList =
>         bundleContext.getServiceReferences(I.class, null);
> 
> Is it that when I get the service reference for the interface, the bundle 
> start has not yet registered the implementation of interface I in the 
> Activator?  Is that an asynchronous call?
> 
> Can anybody help
> 
> Regards
> Sai
> 
> 
> 
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html



Re: Why does Karaf say this is a uses constraint?

2018-01-29 Thread Tim Ward
The JClouds API sucks for this reason. They always return/receive a guava 
collection type rather than just using guava to generate the collections. It’s 
a leaky abstraction nightmare!

Tim

Sent from my iPhone

> On 29 Jan 2018, at 15:58, Ryan Moquin  wrote:
> 
> Francois, JB:
> 
> I figured it out shortly after writing this email.  This issue was caused by 
> a combination of a small goof on my side and a misconception on what version 
> of Guava is supported by JClouds.  My goof was that I forgot that in order to 
> programmatically work with JClouds, I had to use a couple Guava classes since 
> JClouds heavily uses it.  As a result of that, my library had an import 
> statement in it's bundle that I forgot about (I was trying to not have to use 
> any Guava stuff, and forgot I couldn't get around it a few days previously).  
> The other part of this was that I thought JClouds had 18.0 - 20.0 set as the 
> valid ranges for Guava, but it's actually 16.0 - 19.0 but since I had 20.0 as 
> the default version in dependency management in my POM, my bundle using 
> JClouds was expecting 20.0 as the only valid version, but JClouds won't wire 
> to that version, hence the error message.  Unfortunately these uses 
> constraint violation errors are very unclear in the majority of cases.  Since 
> this was really about Guava 20.0 being imported by my bundle and JClouds 
> binding to a version 19.0 or below.
> 
>> On Fri, Jan 26, 2018 at 12:35 PM Francois Papon 
>>  wrote:
>> Hi Ryan,
>> 
>> Can you share your pom.xml and your feature.xml ?
>> 
>> François
>> 
>>> Le 26/01/2018 à 20:55, Ryan Moquin a écrit :
>>> My bundle depends on an rdf4j bundle which has to use Guava 18.0, and it 
>>> also depends on a bundle that has to use Guava 20.0.  I have it set to 
>>> Guava 20.0 in my rkmoquin common bundle.  I am actually trying to install 
>>> an overall feature which includes some other features.  It had been working 
>>> fine until I added Jclouds to the mix which I think uses a version range 
>>> for Guava.  I guess this is because I have a feature which depends on two 
>>> other features which each depend on different versions of Guava.  The main 
>>> bundle for my feature also uses Guava as a dependency but specifies and 
>>> installs a certain version of it.  It seems like even if my bundle 
>>> specifies a certain version of Guava, because other classes used in that
>>>  bundle "uses" classes that could bind to other Guava versions, then 
>>> the class space can't be guaranteed to be the same version of Guava 
>>> dependencies (if that makes any sense)?
>>> 
>>> I don't think my bundle has to use Guava 20.0, it was the best version to 
>>> depend on that didn't result in uses constraints with other bundle 
>>> dependencies previously.  Adding JClouds with it's Guava dependency range 
>>> maybe causes uncertainty in the class space?  So guess there could be a mix 
>>> of packages which are in the same class path but "uses" different versions 
>>> of a package. 
>>> 
>>> I can't do a refresh since the feature install fails and then the problem 
>>> bundle isn't installed...I guess if I manually install the dependencies, I 
>>> can then use Karaf to see how it is trying to wire then.  I would think 
>>> this is where making certain dependant features a "prerequisite" might help 
>>> with that, but doing that seems to cause things to go crazy when I try that 
>>> :). I don't think I understand the prerequisite and dependency attributes 
>>> for features.
>>> 
>>> Ryan
>>> 
>>> 
>>> 
 On Fri, Jan 26, 2018 at 10:15 AM Jean-Baptiste Onofré  
 wrote:
 Hi Ryan,
 
 can you try a refresh ?
 
 What's your import in the rkmoquin.common bundle ?
 
 Regards
 JB
 
 On 01/26/2018 03:40 PM, Ryan Moquin wrote:
 > I keep running into situations where I get a uses constraint, but the 
 > complaint
 > is talking about an import and export chain that involve the exact same
 > dependency, such as with Guava below... why is this a uses constraint 
 > and how do
 > you deal with it?
 >
 > Error executing command: Uses constraint violation. Unable to resolve 
 > resource
 > com.rkmoquin.common [com.rkmoquin.common/1.0.0.SNAPSHOT] because it is 
 > exposed
 > to package 'com.google.common.base' from resources com.google.guava
 > [com.google.guava/20.0.0] and com.google.guava [com.google.guava/20.0.0] 
 > via two
 > dependency chains.
 >
 > Chain 1:
 >   com.rkmoquin.common [com.rkmoquin.common/1.0.0.SNAPSHOT]
 > import:
 > (&(osgi.wiring.package=com.google.common.base)(version>=20.0.0)(!(version>=21.0.0)))
 >  |
 > export: osgi.wiring.package: com.google.common.base
 >   com.google.guava [com.google.guava/20.0.0]
 >
 > Chain 2:
 >   com.rkmoquin.common [com.rkmoquin.common/1.0.0.SNAPSHOT]
 > import:
 > (&(osgi.wiring.package=com.google.common.collect)(versi

Re: Transaction Control

2018-01-18 Thread Tim Ward
Hi Scott,

Those are the two bundles that you need. At a guess Karaf is unhappy because 
the Aries bundles substitutably export the API packages that they need 
(including config admin) for ease of deployment. When installing bundles Karaf 
sometimes attempts some interesting package rewiring operations and kills its 
own console, which is (I guess) what’s happening here. It’s hard to be sure 
though…

Tim

> On 16 Jan 2018, at 21:04, Leschke, Scott  wrote:
> 
> I’ve been trying to integrate this into my app. I just pulled down the latest 
> (0.0.3) jars from Maven Central.  As far as I can tell, the two bundles I 
> need are
> tx-control-service-local
> tx-control-provider-jdbc-local
>  
> I can drop the first one into my deploy directory and it tells me it needs 
> the second.  As soon as I try to deploy the second jar, the console stops 
> recognizing commands or hangs (at least that’s the appearance).  The  
> Following is what I see in the log.  Note that I’m using 4.2.0.M2.
>  
>  
> 2018-01-16T14:38:17,751 | INFO  | fileinstall-c:/bam | fileinstall
>   | 9 - org.apache.felix.fileinstall - 3.6.4 | Installing bundle 
> tx-control-provider-jdbc-local / 0.0.3
> 2018-01-16T14:38:17,813 | INFO  | FelixFrameworkWiring | CommandExtension 
> | 33 - org.apache.karaf.shell.core - 4.2.0.M2 | Unregistering 
> commands for bundle org.apache.karaf.event/4.2.0.M2
> 2018-01-16T14:38:17,820 | INFO  | FelixFrameworkWiring | CommandExtension 
> | 33 - org.apache.karaf.shell.core - 4.2.0.M2 | Unregistering 
> commands for bundle org.apache.karaf.features.command/4.2.0.M2
> 2018-01-16T14:38:17,828 | INFO  | FelixFrameworkWiring | Activator
> | 6 - org.ops4j.pax.logging.pax-logging-api - 1.10.1 | Disabling 
> SLF4J API support.
> 2018-01-16T14:38:17,828 | INFO  | activator-1-thread-2 | CommandExtension 
> | 33 - org.apache.karaf.shell.core - 4.2.0.M2 | Unregistering 
> commands for bundle org.apache.karaf.kar.core/4.2.0.M2
> 2018-01-16T14:38:17,828 | INFO  | FelixFrameworkWiring | Activator
> | 6 - org.ops4j.pax.logging.pax-logging-api - 1.10.1 | Disabling 
> Jakarta Commons Logging API support.
> 2018-01-16T14:38:17,829 | INFO  | FelixFrameworkWiring | Activator
> | 6 - org.ops4j.pax.logging.pax-logging-api - 1.10.1 | Disabling 
> Log4J API support.
> 2018-01-16T14:38:17,829 | INFO  | FelixFrameworkWiring | Activator
> | 6 - org.ops4j.pax.logging.pax-logging-api - 1.10.1 | Disabling 
> Avalon Logger API support.
> 2018-01-16T14:38:17,829 | INFO  | FelixFrameworkWiring | Activator
> | 6 - org.ops4j.pax.logging.pax-logging-api - 1.10.1 | Disabling 
> JULI Logger API support.
> 2018-01-16T14:38:17,829 | INFO  | activator-1-thread-1 | 
> HttpServiceFactoryImpl   | 105 - org.ops4j.pax.web.pax-web-runtime - 
> 6.1.0 | Unbinding bundle: [org.apache.karaf.webconsole.features [111]]
> 2018-01-16T14:38:17,830 | INFO  | FelixFrameworkWiring | Activator
> | 6 - org.ops4j.pax.logging.pax-logging-api - 1.10.1 | Disabling 
> Log4J v2 API support.
> 2018-01-16T14:38:17,831 | INFO  | activator-1-thread-1 | FeaturesPlugin   
> | 111 - org.apache.karaf.webconsole.features - 4.2.0.M2 | 
> Features plugin deactivated
> 2018-01-16T14:38:17,870 | INFO  | FelixFrameworkWiring | core 
> | 12 - org.apache.aries.jmx.core - 1.1.7 | Unregistering 
> org.osgi.jmx.framework.BundleStateMBean to MBeanServer 
> org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@700031b9 
> with name 
> osgi.core:type=bundleState,version=1.7,framework=org.apache.felix.framework,uuid=cac40ac4-12dd-4cb5-8ded-1f73370df0aa
> 2018-01-16T14:38:17,870 | INFO  | FelixFrameworkWiring | core 
> | 12 - org.apache.aries.jmx.core - 1.1.7 | Unregistering 
> org.osgi.jmx.service.cm.ConfigurationAdminMBean to MBeanServer 
> org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@700031b9 
> with name 
> osgi.compendium:service=cm,version=1.3,framework=org.apache.felix.framework,uuid=cac40ac4-12dd-4cb5-8ded-1f73370df0aa
> 2018-01-16T14:38:17,870 | INFO  | FelixFrameworkWiring | core 
> | 12 - org.apache.aries.jmx.core - 1.1.7 | Unregistering 
> org.osgi.jmx.framework.FrameworkMBean to MBeanServer 
> org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@700031b9 
> with name 
> osgi.core:type=framework,version=1.7,framework=org.apache.felix.framework,uuid=cac40ac4-12dd-4cb5-8ded-1f73370df0aa
> 2018-01-16T14:38:17,870 | INFO  | FelixFrameworkWiring | core 
> | 12 - org.apache.aries.jmx.core - 1.1.7 | Unregistering 
> org.osgi.jmx.framework.ServiceStateMBean to MBeanServer 
> org.apache.karaf.management.internal.EventAdminMBeanServerWrapper@700031b9 
> with name 
> osgi.core:type=serviceState,version=1.7,framework

Re: [osgi-dev] Create instance of factory configuration at runtime

2017-12-18 Thread Tim Ward
> to point 2)
> about 2-3 years ago I had a similar problem. The 'solution' was to create a 
> configuration file (with the properties) in the 
> 'felix.fileinstall.dir' directory. If it is a factory, the filename should be 
> <factory-pid>-<name>.cfg
> After some time (depending on configuration) FileInstall will scan the file, 
> create the configuration and the service will be instantiated.
>  
> ps. I have just checked, the 'hack' still exists and is live
>  
> Michael
>  
> On 13.12.2017 at 22:24, Leschke, Scott via osgi-dev 
> <osgi-...@mail.osgi.org> wrote:
>  
> Hey Tim,
>  
> Thanks for this. Yes that worked. I recall seeing something about this a 
> number of months ago but didn’t pay much attention since it didn’t apply to 
> me at the time. Not sure why I didn’t think to give that a try though.
>  
> Two additional questions for anybody who cares to answer.
>  
> 1)  If “?” isn’t used, what would the location argument look like?  Is it 
> like a bundle symbolic id, with wildcards perhaps? It’s unclear to me how 
> this would be used even if you wanted to.
> 2)  Now for a Karaf question to any/all takers. When a service instance 
> is created this way, is there a way to associate a .cfg file with it so that 
> the service configuration will persist across Karaf upgrades?  I know that if 
> a Configuration record is updated, the service’s corresponding .cfg file is 
> updated, but if you create a new service, you don’t get a .cfg file.
>  
> Scott
>  
> From: Tim Ward [mailto:tim.w...@paremus.com] 
> Sent: Saturday, December 09, 2017 3:04 AM
> To: Leschke, Scott; OSGi Developer Mail List
> Subject: Re: [osgi-dev] Create instance of factory configuration at runtime
>  
> Hi Scott,
>  
> That does work, but Configuration Admin has an old feature called location 
> binding. This feature prevents a configuration being delivered to bundles 
> other than the bundle with the specified bundle location. 
>  
> The one-arg version of createFactoryConfiguration that you’re using defaults 
> the bundle location to the location of the bundle which got the 
> ConfigurationAdmin service instance that you’re using. This is almost never 
> the correct location as it usually means only the management bundle can see 
> your configuration.
>  
> The location binding behaviour is so annoying that the general recommendation 
> is to disable it by using the two arg versions of Config Admin methods with a 
> wildcard location binding (a “?”).
>  
> My guess is that the two arg version will give you what you’re looking for. 
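
A sketch of that two-arg call, using the PID and properties from the quoted question:

ConfigurationAdmin ca;
// "?" is the multi-location wildcard, so the configuration is not bound to
// the location of the bundle that obtained the ConfigurationAdmin service
Configuration cfg = ca.createFactoryConfiguration("my.configuration.pid", "?");
cfg.update(newServiceProps);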
>  
> Tim
> 
> Sent from my iPhone
> 
> On 9 Dec 2017, at 00:36, Leschke, Scott via osgi-dev <osgi-...@mail.osgi.org> wrote:
> 
> How does one create a new instance of a factory configuration 
> programmatically?
>  
> I thought it was like
>  
> ConfigurationAdmin ca;
> ca.createFactoryConfiguration(“my.configuration.pid”).update(newServiceProps);
>  
> but that doesn’t seem to work for me.
>  
> Thanks in advance,
>  
> Scott
> ___
> OSGi Developer Mail List
> osgi-...@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev
> ___
> OSGi Developer Mail List
> osgi-...@mail.osgi.org
> https://mail.osgi.org/mailman/listinfo/osgi-dev


Re: Recommended CDI tool, Blueprint / DS / Dependency Management / Low level API?

2017-12-18 Thread Tim Ward
Hi,

> I would say, try DS/SCR first and if you have some limitation, you can mix 
> with blueprint.

I definitely agree with the general statement that people should start with DS. 
Using the low level API correctly is *hard*, and many of the examples using it 
are bad. In the application space there are few, if any, reasons to choose 
the low-level API. One other point in favour of DS is that it gets regularly 
updated in the OSGi specifications. Blueprint may be a standard, but the 
standard hasn’t been updated to cover configuration, prototype scoped services, 
or any new OSGi features from the last five years.

The other good point that JB is making (which may have gone unnoticed) is that 
you don’t have to choose the same injection container for all of your bundles. 
Because all bundles should interact using OSGi services you can use different 
injection containers in different bundles and know that those bundles will 
still work together - the way that they are wired together internally is hidden.

Regards,

Tim Ward

OSGi IoT EG chair
tim.w...@paremus.com



> On 18 Dec 2017, at 06:56, Jean-Baptiste Onofré <j...@nanthrax.net> wrote:
> 
> Hi,
> 
> Not easy question as it depends what you want to do ;)
> 
> Blueprint has the proxy (convenient), DS is more dynamic and closer to OSGi 
> (good), "pure" OSGi is always working (but lot of boilerplate code).
> 
> My experience is for an end-user application, DS is interesting. For more 
> framework/low level application, OSGi can be a serious candidate.
> 
> I would say, try DS/SCR first and if you have some limitation, you can mix 
> with blueprint.
> 
> Regards
> JB
> 
> On 12/17/2017 09:59 PM, Guenther Schmidt wrote:
>> Hello All,
>> what is the recommended CDI tool?
>> Should I use
>>  * the low level API (BundleActivator),
>>  * Felix Dependency Management,
>>  * Blueprint,
>>  * or annotation based DS?
>> I'd rather not use the first two options, i don't want to buy into anything 
>> non-standard.
>> Guenther
> 
> -- 
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com



Re: Adding an @Activate to a DS bundle causes the bundle not to load

2017-12-04 Thread Tim Ward
My educated guess...

By adding the activate method you have increased the required version of DS 
detected by bnd. This, in turn, has probably added a Require-Capability for the 
service that you import. This has no effect at runtime (due to the value of its 
effective directive) but if the service exporter does not have a corresponding 
Provide-Capability then it may have broken the Karaf feature resolver, which 
would stop your feature from deploying with the error that you see

In bnd you can fix this using repository augments (as it’s only a resolve-time 
issue). I don’t know whether Karaf has a similar feature. 

The other fix is to make sure that your Postgres driver correctly advertises 
its service capabilities using Provide-Capability.

Tim

Sent from my iPhone

> On 4 Dec 2017, at 22:15, Steinar Bang  wrote:
> 
> Platform: Java 1.8, karaf 4.1.3
> 
> I have the following DS component that exposes a Servlet to the Pax Web
> Whiteboard Extender:
> https://github.com/steinarb/sonar-collector/blob/master/sonar-collector-webhook/src/main/java/no/priv/bang/sonar/collector/webhook/SonarCollectorServlet.java#L55
> 
> The component starts fine, and exposes a Servlet service that is picked
> up by the whiteboard extender, and as far as I can tell, it does what it
> is expected to do (receive POSTs from SonarQube/SonarCloud and store
> build statistics in a PostgreSQL database).
> 
> However, if I add an empty activate method, like so:
> @Component(service={Servlet.class}, property={"alias=/sonar-collector"} )
> public class SonarCollectorServlet extends HttpServlet {
> ...
> @Activate
> public void activate(Map<String, Object> config) {
> }
> ...
> }
> 
> then the component fails to load, because of missing dependencies:
> karaf@root()> feature:repo-add 
> mvn:no.priv.bang.sonar.sonar-collector/sonar-collector-webhook/LATEST/xml/features
> Adding feature url 
> mvn:no.priv.bang.sonar.sonar-collector/sonar-collector-webhook/LATEST/xml/features
> karaf@root()> feature:install sonar-collector-webhook
> Error executing command: Unable to resolve root: missing requirement [root] 
> osgi.identity; osgi.identity=sonar-collector-webhook; type=karaf.feature; 
> version="[1.0.0.SNAPSHOT,1.0.0.SNAPSHOT]"; 
> filter:="(&(osgi.identity=sonar-collector-webhook)(type=karaf.feature)(version>=1.0.0.SNAPSHOT)(version<=1.0.0.SNAPSHOT))"
>  [caused by: Unable to resolve sonar-collector-webhook/1.0.0.SNAPSHOT: 
> missing requirement [sonar-collector-webhook/1.0.0.SNAPSHOT] osgi.identity; 
> osgi.identity=no.priv.bang.sonar.sonar-collector-webhook; type=osgi.bundle; 
> version="[1.0.0.SNAPSHOT,1.0.0.SNAPSHOT]"; resolution:=mandatory [caused by: 
> Unable to resolve no.priv.bang.sonar.sonar-collector-webhook/1.0.0.SNAPSHOT: 
> missing requirement 
> [no.priv.bang.sonar.sonar-collector-webhook/1.0.0.SNAPSHOT] osgi.service; 
> effective:=ac
> tive; filter:="(objectClass=org.osgi.service.jdbc.DataSourceFactory)"]]
> karaf@root()>
> 
> If I remove the "@Activate" annotation, the component loads again.
> 
> Does anyone know what might cause this?
> 
> What's strange about this is that the missing dependency the error message
> complains about, i.e. org.osgi.service.jdbc.DataSourceFactory, is
> essential to the servlet's operation.  Without a DataSourceFactory, no
> database can be contacted and no data can be saved (and data _is_ saved).
> 
> Is the error message because the bundle can't find the type
> org.osgi.service.jdbc.DataSourceFactory? Or is the message about not
> getting an instance of org.osgi.service.jdbc.DataSourceFactory?
> 
> The full error message from karaf.log below.
> 
> Thanks!
> 
> 
> - Steinar
> 
> Error message from karaf.log follows:
> 
> 2017-12-04T20:28:57,555 | ERROR | Karaf local console user karaf | ShellUtil  
>   | 42 - org.apache.karaf.shell.core - 4.1.3 | Exception 
> caught while executing command
> org.osgi.service.resolver.ResolutionException: Unable to resolve root: 
> missing requirement [root] osgi.identity; 
> osgi.identity=sonar-collector-webhook; type=karaf.feature; 
> version="[1.0.0.SNAPSHOT,1.0.0.SNAPSHOT]"; 
> filter:="(&(osgi.identity=sonar-collector-webhook)(type=karaf.feature)(version>=1.0.0.SNAPSHOT)(version<=1.0.0.SNAPSHOT))"
>  [caused by: Unable to resolve sonar-collector-webhook/1.0.0.SNAPSHOT: 
> missing requirement [sonar-collector-webhook/1.0.0.SNAPSHOT] osgi.identity; 
> osgi.identity=no.priv.bang.sonar.sonar-collector-webhook; type=osgi.bundle; 
> version="[1.0.0.SNAPSHOT,1.0.0.SNAPSHOT]"; resolution:=mandatory [caused by: 
> Unable to resolve no.priv.bang.sonar.sonar-collector-webhook/1.0.0.SNAPSHOT: 
> missing requirement 
> [no.priv.bang.sonar.sonar-collector-webhook/1.0.0.SNAPSHOT] osgi.s
> ervice; effective:=active; 
> filter:="(objectClass=org.osgi.service.jdbc.DataSourceFactory)"]]
>at 
> org.apache.felix.resolver.ResolutionError.toException(ResolutionError.java:42)
>  ~[?:?]
>at org.apache.felix.resolver.ResolverImpl.doResolve(ResolverImpl.ja

Re: Writing commands for karaf shell.

2017-07-22 Thread Tim Ward
Sorry to wind this back a little, but there were a couple of questions from Tom 
which got skipped over. 

I'm afraid that when it comes to shells there isn't a standard. There was an 
RFC created a long time ago, which roughly represented the work that is now 
Gogo. There was a decision at the time that there wasn't a need for a standard; 
that decision could be revisited, particularly if someone wants to drive the 
work through the Alliance.

As for the following question:

>> Originally I thought that Karaf was the "enterprise version of felix". This 
>> doesn't seem to be the case?

Karaf and Felix may both be hosted at Apache, but Karaf is a totally separate 
project from Felix with a very different ethos. Karaf does not implement an 
OSGi framework, or OSGi standards, but builds a server based on OSGi components 
from a variety of places. 

Karaf is flexible, but ultimately opinionated about libraries and dictates a 
number of high level choices. Felix works hard to allow you to use 
implementations from anywhere with the standalone components they produce. 

Karaf is also prepared to invent concepts (e.g. features and kar files) and not 
contribute them back to OSGi, leaving them as proprietary extensions. This even 
happens when OSGi standards do exist (or are nearly final). Karaf also promotes 
non standard (and some non Apache) programming model extensions.

While this does, by some measures, make Karaf a "bad" OSGi citizen, it is also 
one of the reasons why Karaf is so successful, and helps to drive OSGi adoption 
(a very good thing for OSGi). By being opinionated Karaf can be simpler for new 
users, even if it provides a more limited view of what your OSGi options are. 
The Felix framework, on the other hand, lets you make all the decisions, but 
also requires you to make all the decisions!

In summary I would describe Karaf as an Open Source OSGi server runtime, where 
Felix is more like a base operating system.

Tim

Sent from my iPhone

> On 22 Jul 2017, at 06:44, Christian Schneider  wrote:
> 
> That sounds interesting. Can you point us to the code where those commands 
> are implemented and where the completion is defined?
> I know there is the completion support that you can define in the shell init 
> script, but I think it is difficult to maintain that way.
> 
> Is it now possible to somehow define the completion for gogo commands per 
> bundle or even by annotations directly on the class?
> 
> Christian
> 
> 2017-07-21 16:57 GMT+02:00 Guillaume Nodet :
>> If you look at Karaf >= 4.1.x, a bunch of commands are not coming from Karaf 
>> anymore, but from Gogo or JLine.  I moved them when working on the gogo / 
>> jline3 integration.  The main point that was blocking imho is that they did 
>> not have completion support.  With the new fully scripted completion system 
>> from gogo-jline, gogo commands can have full completion, so I don't see any 
>> blocking points anymore.  It's just about tracking commands and registering 
>> them in the karaf shell.
>> 
>> 2017-07-21 15:27 GMT+02:00 Christian Schneider :
 On 21.07.2017 12:27, t...@quarendon.net wrote:
 Yes, but what's the actual situation from a standards point of view?
 Is a shell defined by a standard at all? OSGi enroute seems to require a 
 gogo shell and appears to rely on felix gogo shell command framework.
 Is it just that Karaf happens to ship a shell that happens to be based on 
 the felix gogo shell (or perhaps not, but stack traces seem to suggest 
 so), but that basically if I want to implement a shell command I have to 
 implement it differently for each shell type?
 
 That seems a poor situation and leaves me with having to implement one 
 command implementation to be used in the development environment and one 
 that is used in the karaf deployment.
 
 Originally I thought that Karaf was the "enterprise version of felix". 
 This doesn't seem to be the case?
 
 There *could* be a really powerful environment and ecosystem here, if it 
 was all a *little* bit less fragmented :-)
>>> I fully agree that we need to work towards more common approaches. The OSGi 
>>> ecosystem is too small to afford being fragmented like this.
>>> We all have the chance and duty to work on improving this though.
>>> 
>>> Christian
>>> 
>>> -- 
>>> Christian Schneider
>>> http://www.liquid-reality.de
>>> 
>>> Open Source Architect
>>> http://www.talend.com
>>> 
>> 
>> 
>> 
>> -- 
>> 
>> Guillaume Nodet
>> 
> 
> 
> 
> -- 
> -- 
> Christian Schneider
> http://www.liquid-reality.de
> 
> Open Source Architect
> http://www.talend.com


Re: Pax-JDBC 1.1 hikari pool and XA Support

2017-07-10 Thread Tim Ward
Another option would be to look at Apache Aries Transaction control. That 
provides a simple, effective model for connection pooling, resource lifecycle 
management, and transaction enlistment. It will also be the reference 
implementation of the OSGi Transaction Control specification when OSGi R7 goes 
final.
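
For a flavour of the programming model, here is a minimal DS sketch (the component name, the table, and the assumption that a JDBCConnectionProvider has already been configured are illustrative, not taken from this thread):

import java.sql.Connection;
import java.sql.PreparedStatement;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.transaction.control.TransactionControl;
import org.osgi.service.transaction.control.jdbc.JDBCConnectionProvider;

@Component
public class OrderStore {

    @Reference
    TransactionControl txControl;

    @Reference
    JDBCConnectionProvider provider;

    public void save(String id) {
        // The connection is scoped to the surrounding transaction;
        // required(...) starts a transaction if none is already active.
        Connection connection = provider.getResource(txControl);
        txControl.required(() -> {
            try (PreparedStatement ps = connection.prepareStatement(
                    "INSERT INTO orders (id) VALUES (?)")) {
                ps.setString(1, id);
                return ps.executeUpdate();
            }
        });
    }
}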

Tim

Sent from my iPhone

> On 10 Jul 2017, at 16:18, Guillaume Nodet  wrote:
> 
> I don't recommend using a non-XA-specific pooling mechanism for JDBC support, 
> unless you don't care about the ability to recover in-flight transactions 
> (which doesn't play well with wanting XA).
> If you're using the geronimo/aries transaction manager, you may want to use 
> the pax-jdbc-pool-aries instead which will support recovery.
> 
> Guillaume
> 
> 2017-07-10 17:03 GMT+02:00 sahlex :
>> Hello.
>> 
>> When using pax-jdbc 1.1.0 (on Karaf 4.0.8) with the hikari pool 
>> support, everything works fine until I switch on XA support.
>> In this case the DataSource is not created! When I comment out the XA 
>> support from the datasource factory config file, the DataSource pops up 
>> again.
>> 
>> I have transaction service 2.1.0 deployed (tried with 1.1.1 as well):
>> list | grep Trans
>> 164 | Active | 80 | 2.1.0 | Apache Aries Transaction Blueprint
>> 165 | Active | 80 | 1.3.1 | Apache Aries Transaction Manager
>> 
>> My configuration file:
>> osgi.jdbc.driver.name = mariadb
>> dataSourceName = whatever
>> databaseName = whatever
>> user = xxx
>> password = xxx
>> pool = hikari
>> # xa = true
>> hikari.maximumPoolSize = 200
>> hikari.connectionTimeout = 400
>> url = 
>> jdbc:mariadb:failover://1.2.3.4/bam?characterEncoding=UTF-8&useServerPrepStmts=true
>> 
>> service:list DataSource
>> [javax.sql.DataSource]
>> --
>> databaseName = whatever
>> dataSourceName = whatever
>> felix.fileinstall.filename = file:/opt/
>> hikari.connectionTimeout = 400
>> hikari.maximumPoolSize = 200
>> osgi.jdbc.driver.name = mariadb
>> osgi.jndi.service.name = whatever
>> password = xxx
>> service.bundleid = 126
>> service.factoryPid = org.ops4j.datasource
>> service.id = 391
>> service.pid = org.ops4j.datasource.f349f611-9c2e-48a7-8ac0-3789a8f5dd66
>> service.scope = singleton
>> url = 
>> jdbc:mariadb:failover://172.17.42.50:3309/whatever?characterEncoding=UTF-8&useServerPrepStmts=true
>> user = xxx
>> Provided by :
>> OPS4J Pax JDBC Config (126)
>> 
>> Regards, Alexander
>> 
>> 
>> 
>> 
>> 
>> --
>> View this message in context: 
>> http://karaf.922171.n3.nabble.com/Pax-JDBC-1-1-hikari-pool-and-XA-Support-tp4050977.html
>> Sent from the Karaf - User mailing list archive at Nabble.com.
> 
> 
> 
> -- 
> 
> Guillaume Nodet
> 


Re: java.lang.ClassNotFoundException: org.h2.Driver from bundle ..

2017-07-01 Thread Tim Ward
That would be Karaf complaining that there is no H2 bundle installed into the 
runtime. You will need to include it in a feature somewhere.

For reference you can configure a DBCP pool like this, passing in a DataSource: 
https://stackoverflow.com/questions/10807902/configuring-apache-dbcp-poolingdatasource-with-spring

Tim
Sent from my iPhone

> On 2 Jul 2017, at 00:10, smunro  wrote:
> 
> In my pom I have org.h2
> 
> And my pom dependency has:
> 
> 
>    <dependency>
>        <groupId>com.h2database</groupId>
>        <artifactId>h2</artifactId>
>        <version>1.3.174</version>
>    </dependency>
> 
> However, I'm getting the following:
> 
> 
> org.osgi.service.resolver.ResolutionException: Unable to resolve root:
> missing requirement [root] osgi.identity; osgi.identity=test-all;
> type=karaf.feature; version="[0.0.17.SNAPSHOT,0.0.17.SNAPSHOT]";
> filter:="(&(osgi.identity=test-all(type=karaf.feature)(version>=0.0.17.SNAPSHOT)(version<=0.0.17.SNAPSHOT))"
> [caused by: Unable to resolve test-all/0.0.17.SNAPSHOT: missing requirement
> [test-all/0.0.17.SNAPSHOT] osgi.identity;
> osgi.identity=org.desolateplanet.authentication-db-impl; type=osgi.bundle;
> version="[0.0.17.SNAPSHOT,0.0.17.SNAPSHOT]"; resolution:=mandatory [caused
> by: Unable to resolve
> org.desolateplanet.authentication-db-impl/0.0.17.SNAPSHOT: missing
> requirement [org.desolateplanet.authentication-db-impl/0.0.17.SNAPSHOT]
> osgi.wiring.package;
> filter:="(&(osgi.wiring.package=org.h2)(version>=1.3.0)(!(version>=2.0.0)))"]]
> 
> 
> 
> --
> View this message in context: 
> http://karaf.922171.n3.nabble.com/java-lang-ClassNotFoundException-org-h2-Driver-from-bundle-tp4050894p4050899.html
> Sent from the Karaf - User mailing list archive at Nabble.com.


Re: java.lang.ClassNotFoundException: org.h2.Driver from bundle ..

2017-07-01 Thread Tim Ward
Hi,

The org.osgi.service.jdbc.DataSourceFactory service is an OSGi standard, not 
part of Karaf (hence the org.osgi package name). You can find the specification 
chapter in the OSGi compendium. If you look at the H2 bundle you'll also see 
that H2 actually implements this standard directly. 

Anyway, your problem occurs because you are passing a String class name to 
DBCP. DBCP is then attempting to dynamically load this class, which fails 
because the DBCP bundle has no visibility of the database implementation class 
(which is as it should be). Effectively the BasicDataSource can't be configured 
safely in OSGi because of this. 

You can assemble a DBCP pool in your blueprint file, using the 
DataSourceFactory service to get hold of a DataSource or Driver, and setting up 
the pool using a set of beans (this is described in the DBCP docs for custom 
pool setup). 
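
For illustration, here is the same set of beans expressed as plain Java rather than Blueprint XML (a sketch only; the unpooled DataSource is assumed to come from the DataSourceFactory service, and the classes are the commons-dbcp2 / commons-pool2 ones you would otherwise wire up as beans):

import javax.sql.DataSource;
import org.apache.commons.dbcp2.ConnectionFactory;
import org.apache.commons.dbcp2.DataSourceConnectionFactory;
import org.apache.commons.dbcp2.PoolableConnection;
import org.apache.commons.dbcp2.PoolableConnectionFactory;
import org.apache.commons.dbcp2.PoolingDataSource;
import org.apache.commons.pool2.impl.GenericObjectPool;

public class PoolBuilder {

    // 'unpooled' is assumed to have been created via
    // DataSourceFactory.createDataSource(props)
    public static DataSource pool(DataSource unpooled) {
        ConnectionFactory connFactory = new DataSourceConnectionFactory(unpooled);
        PoolableConnectionFactory poolableFactory =
                new PoolableConnectionFactory(connFactory, null);
        GenericObjectPool<PoolableConnection> pool =
                new GenericObjectPool<>(poolableFactory);
        poolableFactory.setPool(pool);
        pool.setMaxTotal(8); // pool sizing and validation settings go here
        return new PoolingDataSource<>(pool);
    }
}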

If you insist on not using the standard then you need to embed the whole of 
DBCP into your bundle and add the imports for the database drivers, which is 
even smellier than what was being suggested in my previous email as you will 
also drag in DBCP's dependencies. 

Another standard (technically a draft pending release later this year) that 
would probably help you is OSGi Transaction Control. This makes it a lot easier 
to safely get hold of a database connection and to use it in transactions. 
There's an implementation of this hosted at Apache Aries if you're interested.

Tim

Sent from my iPhone

> On 1 Jul 2017, at 23:17, smunro  wrote:
> 
> Hello Timothy,
> 
> Thanks for the quick reply.
> 
> The issue is that my manager is pushing me to be container agnostic and to
> avoid tying myself to Karaf. Personally, I'd rather use pax-jdbc but I am
> not permitted to do so and I agree about your comment regarding code smells
> for tying to a specific jdbc implementation.
> 
> What I have at the moment is a blueprint file with a dbcp BasicDataSource,
> this file has two configurations. One for sql server and another for h2. I
> had a lot of trouble getting SQL Server to work using the data source
> strategy, which was enough for my manager to push me to use a direct
> connection for both.
> 
> I've tried both setting the Bundle-Classpath and Import-Package with org.h2
> and both seem to result in an error. I have h2 defined as a pom dependency,
> is there anything else I am missing?
> 
> 
> 
> --
> View this message in context: 
> http://karaf.922171.n3.nabble.com/java-lang-ClassNotFoundException-org-h2-Driver-from-bundle-tp4050894p4050897.html
> Sent from the Karaf - User mailing list archive at Nabble.com.


Re: java.lang.ClassNotFoundException: org.h2.Driver from bundle ..

2017-07-01 Thread Tim Ward
The correct way to obtain instances of JDBC resources is using a 
org.osgi.service.jdbc.DataSourceFactory service. This decouples you from a 
specific JDBC driver and allows you to pick an appropriate implementation at 
runtime. You can inject instances of this service using DS, Blueprint, or get 
hold of it in several other ways.
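
As a minimal DS sketch (the component name, URL and target filter value are illustrative assumptions; check what properties your driver bundle actually registers on its DataSourceFactory service):

import java.sql.SQLException;
import java.util.Properties;
import javax.sql.DataSource;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.jdbc.DataSourceFactory;

@Component
public class MyRepository {

    // Select the driver by service property rather than compiling against it
    @Reference(target = "(osgi.jdbc.driver.class=org.h2.Driver)")
    DataSourceFactory dsf;

    private DataSource dataSource;

    @Activate
    void activate() throws SQLException {
        Properties props = new Properties();
        props.setProperty(DataSourceFactory.JDBC_URL, "jdbc:h2:mem:example");
        dataSource = dsf.createDataSource(props);
    }
}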

If you really do want to couple to a specific database then you will need to 
explicitly import the package that the driver comes from in the bundle that 
uses it. Note that this is a code smell, and prevents you from using an 
alternative, even when testing the bundle.

Tim

Sent from my iPhone

> On 1 Jul 2017, at 22:29, smunro  wrote:
> 
> As addendum,
> 
> I am not using any data sources in karaf using pax-jdbc. The reason being I
> require SQL Server support and opted to do a direct connection via a bundle. 
> 
> 
> 
> --
> View this message in context: 
> http://karaf.922171.n3.nabble.com/java-lang-ClassNotFoundException-org-h2-Driver-from-bundle-tp4050894p4050895.html
> Sent from the Karaf - User mailing list archive at Nabble.com.


Re: missing requirement osgi.contract=JavaServlet

2017-06-12 Thread Tim Ward
In answer to your question, yes PAX Web should be providing the contract.

Tim

Sent from my iPhone

> On 12 Jun 2017, at 13:48, t...@quarendon.net wrote:
> 
> OK, I've "solved" this by creating an additional bundle that simply has the 
> required:
> 
> Provide-Capability: osgi.contract;osgi.contract=JavaServlet;version:Version="3.1";
>  uses:="javax.servlet,javax.servlet.http,javax.servlet.descriptor,javax.servlet.annotation"
> 
> line in the MANIFEST.
> 
> I say "solved", the karaf assmembly now at least builds. I have yet to 
> determine how successfully it actually runs.
> 
> However, this seems like a gross hack to me. Shouldn't the pax-web http-api 
> bundle provide this capability?
> 
> Thanks.


Re: missing requirement osgi.contract=JavaServlet

2017-06-12 Thread Tim Ward
Hi Achim,

This isn't particularly new; the original recommendation is at least three 
years old and was blogged about in 2014 
(http://blog.osgi.org/2014/09/portable-java-contracts-for-javax.html?m=1). 

The complexity comes from the fact that Java EE uses marketing versions for 
API, not semantic ones. My bundle using servlet 2.5 still needs to work with 
modern 3.x containers. The same is true of JPA, JAX-RS and a host of others.

The OSGi Alliance cannot fix this problem at source, so it had to come up with an 
alternative.

The bnd "contract" instruction makes it easy to require a contract, from there 
it's just a question of making sure you use a suitably packaged implementation.
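
In a .bnd file that boils down to something like the following (a minimal sketch; the generated manifest requirement shown underneath is approximate and depends on the bnd version):

-contract: JavaServlet

which ends up in the manifest as roughly:

Require-Capability: osgi.contract;
 filter:="(&(osgi.contract=JavaServlet)(version=3.1))"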

Regards,

Tim

Sent from my iPhone

> On 12 Jun 2017, at 13:47, Achim Nierbeck  wrote:
> 
> Ok ... that's new, and when did that happen? 
> Which package is supposed to provide that? 
> And why do we now need another re-packaged version of the already available 
> servlet api package? 
> 
> That's one of those moments I really can see why people say OSGi makes 
> everything far too complex ... 
> 
> regards, Achim 
> 
> 
> 2017-06-12 14:43 GMT+02:00 Tim Ward :
>> Requiring the JavaServlet contract is a good idea, and recommended by the 
>> OSGi Alliance (https://www.osgi.org/portable-java-contract-definitions/). 
>> You need a bundle which provides the contract in your runtime. I'd suggest 
>> using the repackaged servlet api from Apache Felix.
>> 
>> Tim
>> 
>> Sent from my iPhone
>> 
>>> On 12 Jun 2017, at 13:39, Achim Nierbeck  wrote:
>>> 
>>> taken from your first mail, 
>>> your bundle mybundle seems  to declare an osgi contract on JavaServlet. 
>>> Never seen that kind of dependency before. 
>>> Make sure you have a clean import-package export-package structure in your 
>>> bundle. 
>>> I think that is your root issue. 
>>> 
>>> regarding using pax-web instead of felix-http. Both provide the http 
>>> service according to the spec. 
>>> So you should be safe on that :)
>>> 
>>> regards, Achim
>>> 
>>> 
>>> 2017-06-12 14:10 GMT+02:00 :
>>>> With regard to the "wrap/0.0.0" error, running Maven with -X gives me:
>>>> 
>>>> Caused by: 
>>>> org.apache.karaf.features.internal.service.Deployer$CircularPrerequisiteException:
>>>>  [wrap/0.0.0]
>>>> at 
>>>> org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:266)
>>>> at 
>>>> org.apache.karaf.profile.assembly.Builder.resolve(Builder.java:1429)
>>>> at 
>>>> org.apache.karaf.profile.assembly.Builder.startupStage(Builder.java:1183)
>>>> at 
>>>> org.apache.karaf.profile.assembly.Builder.doGenerateAssembly(Builder.java:659)
>>>> at 
>>>> org.apache.karaf.profile.assembly.Builder.generateAssembly(Builder.java:441)
>>>> at 
>>>> org.apache.karaf.tooling.AssemblyMojo.doExecute(AssemblyMojo.java:506)
>>>> at 
>>>> org.apache.karaf.tooling.AssemblyMojo.execute(AssemblyMojo.java:262)
>>>> ... 22 more
>>>> 
>>>> 
>>>> Suggesting a circular dependency issue somewhere, though quite where, who 
>>>> knows. There are one or two references to "pax-url-wrap" in the -X output, 
>>>> but that's all there is that mentions "wrap" at any point.
>>>> 
>>>> Don't know whether that helps?
>>> 
>>> 
>>> 
>>> -- 
>>> 
>>> Apache Member
>>> Apache Karaf <http://karaf.apache.org/> Committer & PMC
>>> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & 
>>> Project Lead
>>> blog <http://notizblog.nierbeck.de/>
>>> Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>
>>> 
>>> Software Architect / Project Manager / Scrum Master 
>>> 
> 
> 
> 
> -- 
> 
> Apache Member
> Apache Karaf <http://karaf.apache.org/> Committer & PMC
> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & 
> Project Lead
> blog <http://notizblog.nierbeck.de/>
> Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>
> 
> Software Architect / Project Manager / Scrum Master 
> 


Re: missing requirement osgi.contract=JavaServlet

2017-06-12 Thread Tim Ward
Requiring the JavaServlet contract is a good idea, and recommended by the OSGi 
Alliance (https://www.osgi.org/portable-java-contract-definitions/). You need a 
bundle which provides the contract in your runtime. I'd suggest using the 
repackaged servlet api from Apache Felix.

Tim

Sent from my iPhone

> On 12 Jun 2017, at 13:39, Achim Nierbeck  wrote:
> 
> taken from your first mail, 
> your bundle mybundle seems  to declare an osgi contract on JavaServlet. 
> Never seen that kind of dependency before. 
> Make sure you have a clean import-package export-package structure in your 
> bundle. 
> I think that is your root issue. 
> 
> regarding using pax-web instead of felix-http. Both provide the http service 
> according to the spec. 
> So you should be safe on that :)
> 
> regards, Achim
> 
> 
> 2017-06-12 14:10 GMT+02:00 :
>> With regard to the "wrap/0.0.0" error, running Maven with -X gives me:
>> 
>> Caused by: 
>> org.apache.karaf.features.internal.service.Deployer$CircularPrerequisiteException:
>>  [wrap/0.0.0]
>> at 
>> org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:266)
>> at 
>> org.apache.karaf.profile.assembly.Builder.resolve(Builder.java:1429)
>> at 
>> org.apache.karaf.profile.assembly.Builder.startupStage(Builder.java:1183)
>> at 
>> org.apache.karaf.profile.assembly.Builder.doGenerateAssembly(Builder.java:659)
>> at 
>> org.apache.karaf.profile.assembly.Builder.generateAssembly(Builder.java:441)
>> at 
>> org.apache.karaf.tooling.AssemblyMojo.doExecute(AssemblyMojo.java:506)
>> at 
>> org.apache.karaf.tooling.AssemblyMojo.execute(AssemblyMojo.java:262)
>> ... 22 more
>> 
>> 
>> Suggesting a circular dependency issue somewhere, though quite where, who 
>> knows. There are one or two references to "pax-url-wrap" in the -X output, 
>> but that's all there is that mentions "wrap" at any point.
>> 
>> Don't know whether that helps?
> 
> 
> 
> -- 
> 
> Apache Member
> Apache Karaf  Committer & PMC
> OPS4J Pax Web  Committer & 
> Project Lead
> blog 
> Co-Author of Apache Karaf Cookbook 
> 
> Software Architect / Project Manager / Scrum Master 
> 


Re: PAX JDBC 1.0.1 pools

2017-02-24 Thread Tim Ward
Hi Scott,

The OSGi Transaction Control service has built in support for connection 
pooling. There's an en route example here:
https://github.com/osgi/osgi.enroute.examples.jdbc
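
As a rough illustration of the configuration-only approach (the factory PID and values below are illustrative and not taken from the example above), with the Aries implementation a file such as etc/org.apache.aries.tx.control.jdbc.local-mydb.cfg is the sort of thing that creates a pooled JDBCConnectionProvider, with the osgi.connection.* properties driving the built-in pool:

osgi.jdbc.driver.class = org.h2.Driver
url = jdbc:h2:./data/exampledb
osgi.connection.pooling.enabled = true
osgi.connection.max = 8
osgi.connection.min = 2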

Regards,

Tim

Sent from my iPhone

> On 24 Feb 2017, at 16:12, Leschke, Scott  wrote:
> 
> I’m a bit confused about how to configure the underlying connection pool. I’ll 
> be using the Hikari pool service, pax-jdbc-pool-hikaricp. Could someone point 
> me to the docs or something? The only example I see is for DBCP and all my 
> experiments thus far have failed.
>  
> Thx, Scott


Re: DS components not Active

2017-02-03 Thread Tim Ward
I disagree with the earlier statement that Blueprint and DS cannot coexist in 
the same bundle. I have thought quite hard about this and can think of no 
reason whatsoever why it should not work. 
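
A bundle that mixes the two just carries both sets of metadata in its manifest, for example (the patterns below are the conventional defaults and purely illustrative):

Service-Component: OSGI-INF/*.xml
Bundle-Blueprint: OSGI-INF/blueprint/*.xml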

Tim

Sent from my iPhone

> On 3 Feb 2017, at 15:28, Christian Schneider  wrote:
> 
> I fully agree. I never understood why this decision was made for config 
> admin. In newer versions of the spec it is now possible to share configs, but 
> in practice I had lots of problems with it. Still sometimes a common service 
> can help.
> 
> Christian
> 
>> On 03.02.2017 16:13, Alex Soto wrote:
>> No, services should be small and highly cohesive.  Just because they need 
>> the same configuration setting, it does not mean they provide the same 
>> functionality or should be combined.  This CM limitation is actually pushing 
>> for bigger less cohesive services.  I am disappointed having invested in the 
>> CM for configuration, and it will cause me painful project delays now.
>> 
>> Best regards,
>> 
>> Alex soto
>> alex.s...@envieta.com
> 
> -- 
> Christian Schneider
> http://www.liquid-reality.de
> 
> Open Source Architect
> http://www.talend.com
> 


Re: DS components not Active

2017-02-03 Thread Tim Ward
That sounds like a bug in Blueprint then; unless there's a filter on the 
blueprint reference which prevents it from picking up the reconfigured service, it 
should definitely re-wire. 

Tim

Sent from my iPhone

> On 3 Feb 2017, at 13:42, Alex Soto  wrote:
> 
> Yes, I hear you,  I’ve read about it here as well.  It is the same bundle; 
> both the Blueprint context and the component are in the same bundle.
> To summarize: a single bundle with both a DS component and a Blueprint 
> context referencing the same configuration PID.  The Blueprint context 
> depends on the service exposed by the DS component.  Initial deployment is 
> fine.  A change in the configuration after steady state causes the Blueprint 
> context to wait for the service forever.  It does not matter if the 
> component’s immediate attribute is true or false; it fails in both cases.
> 
> Best regards,
> Alex soto
> 
> 
>> On Feb 3, 2017, at 3:00 AM, Timothy Ward  wrote:
>> 
>> One question - how is the configuration managed? If the blueprint context 
>> and DS component are in different bundles and the configuration gets bound 
>> to a single bundle location then you may only be configuring one of the 
>> bundles. Using the same PID across multiple bundles is usually a recipe for 
>> misery and pain…
>> 
>> Regards,
>> 
>> Tim
>> 
>>> On 2 Feb 2017, at 21:34, Alex Soto  wrote:
>>> 
>>> I see how I confused you, I am sorry.   I have been simplifying the 
>>> description of the problem to make it easier to understand.   The reality 
>>> is that yes, both the Blueprint context and the component depend on the 
>>> same PID.  The output I provided earlier does not show it, because this 
>>> component depends on another component which is the one that depends on the 
>>> PID.  So the component is restarted due to its dependency on the other 
>>> component.   In other words, when I update a property, the two components 
> restart, as does the blueprint context, but the Blueprint context does not 
> finish initializing, waiting forever for the service exposed by the 
>>> component.
>>> 
>>> Best regards,
>>> Alex soto
>>> 
>>> 
>>> 
 On Feb 2, 2017, at 4:23 PM, David Jencks  wrote:
 
 I think the problem is in blueprint somewhere.  I’m a bit confused since 
 you seem to be saying there is only one pid but the DS component uses 
 “org.MyServiceImpl" and the blueprint container uses (IIUC, which I may 
 not) “business”.
 
 david jencks
 
> On Feb 2, 2017, at 1:13 PM, Alex Soto  wrote:
> 
> The Blueprint bundle restarts because it names the configuration PID that 
> the Component depends on:
> i.e., it has something like this:
> 
>  
> Best regards,
> Alex soto
> 
> 
> 
>> On Feb 2, 2017, at 3:49 PM, David Jencks  
>> wrote:
>> 
>> IMO it’s unlikely to be a problem in the DS framework.  Why is a change 
>> in a configuration making a blueprint container restart?  I’d expect the 
>> damping proxies to leave the same blueprint component instance in place.
>> 
>> david jencks
>> 
>>> On Feb 2, 2017, at 12:04 PM, Alex Soto  wrote:
>>> 
>>> Yes, the component did not have the immediate=true, and I have an 
>>> annotated activate method, but I don’t have a deactivate method.  The 
>>> fact that the scr:info shows a name for this method caught my 
>>> attention, since the modified method is instead shown as a dash (-).  
>>> This was just me looking for a pattern of something different/wrong.
>>> 
>>> Anyway, thanks for the clarification.  
>>> 
>>> The real problem I am having is that another Blueprint bundle is 
>>> waiting forever for the service exposed by my component.   Apparently, 
>>> the Blueprint dependency for the service is NOT triggering the 
>>> activation of this component. 
>>> 
>>> Now, this does not occur during initial startup, but only after the 
>>> container has been running, and a change in the component configuration 
>>> causes the component to restart.  I believe there may be a bug here.
>>> 
>>> 
>>> Best regards,
>>> Alex soto
>>> 
>>> 
 On Feb 2, 2017, at 2:50 PM, Christian Schneider 
  wrote:
 
 Hi Alex,
 
 I suppose these components do not have immediate=true and are not used 
 by any other component. This is just the normal lazy loading.
 Without the immediate flag a DS component is only activated if its 
 service is used.
 
 Christian
 
 2017-02-02 20:04 GMT+01:00 Alex Soto :
> Hello,
> 
> I am using Karaf 4.0.8.  
> 
> Some DS components in my application do not show as ACTIVE in the 
> output from the scr:components command, but show a blank state.
> I do not see any difference between the other DS components that are 
> shown as ACTIVE,  an

Re: Cannot inject blueprint exposed service with annotations

2017-02-02 Thread Tim Ward
Yes - the Reference annotation has a number of useful properties that you can 
set to control its behaviour. 
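
For instance, a sketch along these lines (the injected service type and the target filter are illustrative):

import javax.sql.DataSource;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;
import org.osgi.service.component.annotations.ReferencePolicyOption;

@Component
public class Consumer {

    // policy, policyOption, cardinality and target are all settings on
    // the @Reference annotation itself
    @Reference(
            policy = ReferencePolicy.STATIC,
            policyOption = ReferencePolicyOption.GREEDY,
            cardinality = ReferenceCardinality.MANDATORY,
            target = "(osgi.jndi.service.name=mydb)")
    DataSource dataSource;
}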

Tim

Sent from my iPhone

> On 2 Feb 2017, at 22:45, Dario Amiri  wrote:
> 
> The javadoc is unclear about where to use it. Is this a setting on the 
> Reference annotation itself?
> 
>> On 02/02/2017 09:46 AM, Timothy Ward wrote:
>>> I did not understand your comment on "Declarative Services with a static 
>>> policy". I'm ignorant of this concept. Is there some documentation I can 
>>> look at to better understand what that means?
>> 
>> The JavaDoc is actually pretty good!
>> https://osgi.org/javadoc/r6/cmpn/org/osgi/service/component/annotations/ReferencePolicy.html#STATIC
>> 
>> 
>>> Finally, I am using the maven-bundle-plugin with the "_dsannotations" 
>>> tag to process the declarative services annotations. Is there a better way?
>> 
>> There are different ways, but not necessarily better. The bnd-maven-plugin 
>> is another choice for generating your OSGi metadata, and it automatically 
>> picks up the annotations. It’s doing the same thing under the covers (even 
>> using the same library), just with some slightly better default 
>> configuration and tooling support. I must admit to being biased though, as I 
>> actually write some of the plugins in that project.
>> 
>> Regards,
>> 
>> Tim
>> 
>>> On 2 Feb 2017, at 16:30, Dario Amiri  wrote:
>>> 
>>> Thank you Timothy.
>>> 
>>> It was the @Reference on the unbind that was creating the problem. I 
>>> don't know why I didn't catch that especially since I have another 
>>> @Reference right next to that one where I did not make the same mistake. 
>>> I guess there's no substitute for a second pair of eyes - makes me wish 
>>> I could go back to pair programming.
>>> 
>>> I did not understand your comment on "Declarative Services with a static 
>>> policy". I'm ignorant of this concept. Is there some documentation I can 
>>> look at to better understand what that means?
>>> 
>>> Finally, I am using the maven-bundle-plugin with the "_dsannotations" 
>>> tag to process the declarative services annotations. Is there a better way?
>>> 
>>> Thanks again,
>>> 
>>> D
>>> 
 On 02/02/2017 08:10 AM, Timothy Ward wrote:
 The next thing to check is, are you using a tool which processes the 
 Declarative Services annotations when building the bundle? Does the bundle 
 have a Service-Component header and a matching XML file? I’m guessing that 
 you probably do as you refer to the reference not being set, but it’s 
 still worth checking!
>>> 
>> 
> 


Re: Karaf-4.0.7/8 - ResolutionException for existing Aries TransactionControl Service.

2016-12-30 Thread Tim Ward
David is right, 

there is a bug in the packaging of the Aries local transaction control service 
implementation. You should just be able to use the XA implementation (which 
also supports local transactions) instead, or you can wait for a fix in the 
snapshots, which will likely appear in the new year.

Tim

Sent from my iPhone

> On 30 Dec 2016, at 17:24, David Jencks  wrote:
> 
> My guess is that the bundle providing the TransactionControl service doesn’t 
> say so by having a Provide-Capability header for it.  For runtime resolution 
> the effectice=active requirements don’t matter but for at last subsystem 
> resolution they do.
> 
> david jencks
> 
>> On Dec 30, 2016, at 6:08 AM, Erwin Hogeweg  wrote:
>> 
>> Hi,
>> 
>> I am having problems getting a feature working with the TransactionControl 
>> Service under Karaf. The error msg suggests that the service is missing or 
>> not active, but I am at a loss as to why.
>> 
>> This is the error:
>> [caused by: Unable to resolve 
>> com.my.persistence.repositories/1.0.0.SNAPSHOT_20161227-1454: missing 
>> requirement [com.my.persistence.repositories/1.0.0.SNAPSHOT_20161227-1454] 
>> osgi.service; 
>> filter:="(objectClass=org.osgi.service.transaction.control.TransactionControl)";
>>  effective:=active]]
>> 
>> I have a DSF:
>> [org.osgi.service.jdbc.DataSourceFactory]
>> -
>>  osgi.jdbc.driver.class = com.mysql.jdbc.Driver
>>  osgi.jdbc.driver.name = mysql
>>  service.id = 255
>>  service.bundleid = 162
>>  service.scope = singleton
>> Provided by : 
>>  OPS4J Pax JDBC MySQL Driver Adapter (162)
>> Used by: 
>>  OSGi Transaction Control JPA Resource Provider - Local Transactions (114)
>> 
>> I have a TransactionControl service:
>> [org.osgi.service.transaction.control.TransactionControl]
>> -
>>  service.vendor = Apache Aries
>>  service.description = The Apache Aries Transaction Control Service for 
>> Local Transactions
>>  osgi.local.enabled = true
>>  service.id = 206
>>  service.bundleid = 113
>>  service.scope = singleton
>> Provided by : 
>>  OSGi Transaction Control Service - Local Transactions (113)
>> 
>> TransactionControl bundles appear to be running correctly:
>> OSGi Transaction Control Service - Local Transactions (113) provides:
>> -
>> [org.osgi.service.transaction.control.TransactionControl]
>> 
>> OSGi Transaction Control JPA Resource Provider - Local Transactions (114) 
>> uses:
>> ---
>> [javax.persistence.spi.PersistenceProvider]
>> [org.osgi.service.jpa.EntityManagerFactoryBuilder]
>> [org.osgi.service.jdbc.DataSourceFactory]
>> 
>> OSGi Transaction Control JPA Resource Provider - Local Transactions (114) 
>> provides:
>> ---
>> osgi.local.enabled = true
>> objectClass = 
>> [org.osgi.service.transaction.control.jpa.JPAEntityManagerProviderFactory]
>> service.id = 207
>> service.bundleid = 114
>> service.scope = bundle
>> 
>> service.pid = org.apache.aries.tx.control.jpa.local
>> objectClass = [org.osgi.service.cm.ManagedServiceFactory]
>> service.id = 209
>> service.bundleid = 114
>> service.scope = singleton
>> 
>> service.pid = 
>> org.apache.aries.tx.control.jpa.local.adaeed20-19cf-4dff-9276-0afc20052ecc
>> user = db_user
>> url = jdbc:mysql://localhost:3306/my_db
>> service.factoryPid = org.apache.aries.tx.control.jpa.local
>> osgi.unit.name = my.pu
>> osgi.jdbc.driver.class = com.mysql.jdbc.Driver
>> felix.fileinstall.filename = 
>> file:<…>/apache-karaf-4.0.7/etc/org.apache.aries.tx.control.jpa.local-.cfg
>> databaseName = my_db
>> objectClass = 
>> [org.osgi.service.transaction.control.jpa.JPAEntityManagerProvider]
>> service.id = 256
>> service.bundleid = 114
>> service.scope = singleton
>> 
>> 
>> FWIW, transactionControl resolves just fine in a ‘stand-alone’ equinox 
>> framework:
>> 
>> g! bundle 89
>> com.my.persistence.repositories_1.0.0.SNAPSHOT_20161227-1454 [89]
>> ...
>>   Services in use:
>> ...
>> 
>> {org.osgi.service.transaction.control.TransactionControl}={service.vendor=Apache
>>  Aries, service.description=The Apache Aries Transaction Control Service for 
>> Local Transactions, osgi.local.enabled=true, service.id=70, 
>> service.bundleid=51, service.scope=singleton}
>> 
>> Does anyone have a suggestion as of what I am missing?
>> 
>> 
>> Thanks and a Happy new Year,
>> 
>> Erwin
>> 
> 


Re: Configuration file handling?

2016-11-28 Thread Tim Ward

OK, nearly there, thanks all so far!

So <configfile> updates the etc folder, and keeps all my data in one place, 
so that looks like the way to go ...


... give or take the upgrade problem. So what happens if

(a) a new version of a feature is installed which sets a new config item 
which wasn't already in the etc file
(b) a new version of a feature is installed which sets a config item 
which is already in the etc file with a different value


?

If the answers are

(a) the new config item gets written to the etc file
(b) the existing config item doesn't get overwritten

then that solves the problem, but I don't see that much detail in the 
documentation?


Oh, and I note that deleting the contents of the data directory is 
stated to clear your Karaf down to a known state. But in fact it doesn't 
do this, because etc is outside data, so garbage config files and items 
can get left behind. What do people normally do about this?


On 28/11/2016 16:03, Jean-Baptiste Onofré wrote:
Actually, feature <configfile> now populates the etc folder as well (since 
Karaf 4.0.5 AFAIR).


Regards
JB

On 11/28/2016 02:50 PM, Christian Schneider wrote:

You already found the configFile option for features. This is the most
widely used option.
The alternative is the config option which simply adds the config in
config admin but not in etc.

Both variants do not cover the upgrade case. A simple way is to just
remove the old config to make sure the new default one is written. There
is no built-in mechanism to preserve user changes in karaf.

Christian

On 28.11.2016 14:05, Tim Ward wrote:

I'm trying to work out how to handle configuration of a system
deployed to Karaf.

I can see that configuration items put into etc/.cfg end up being
passed to the @Activate method (or whatever), and that you can change
configuration either by editing the .cfg file or from the Karaf
command line (or, I'm guessing, from a JMX console). So that's all
fine, I think.

The parts of the process that I don't yet understand are

(a) getting the .cfg file into etc in the first place
(b) what happens on upgrade.

Let's say the Java source files for the code are in git, and get built
into bundles using Eclipse, and the bundles are installed into Karaf
by some mechanism (I gather that there are some choices, such as
simply dropping the bundle files into the deploy directory). So the
first question is, how do the initial, default, states of the .cfg
files get from git into the etc directory (I'm hoping for a less error
prone answer than checking them out manually and copying them 
manually)?


Then, the life cycle of a configuration file in other contexts is
typically

(1) when the software is first installed, the initial, default state
of the configuration file gets installed at the same time as part of
the same process
(2) the user then edits the configuration file to suit this particular
deployment
(3) a later upgrade to the software comes with a new version of the
configuration file containing some new items, and the upgrade process
must ensure that neither these new items nor the user changes at (2)
get lost.

So how is this managed?

What I've found so far is that one can create a "feature" and use
<configfile>. But the documentation I've seen doesn't appear to cope
with upgrade - I think it said that new versions of config files would
be silently discarded if an old version was already there? - which
doesn't meet the case (our Operations people get quite cross with
upgrades that do this). At the very very least there needs to be a
clear warning flagged up to the user that they need to do a manual 
merge.


And, what is the "URL" that one puts in a , assuming that
there's a solution to the upgrade issue?

To summarise my questions:

(A) What are the options for getting initial, default config files
from git to etc?
(B) How do people cope with the upgrade issue?
(C) If features and <configfile> are part of the solution, what's the
"URL"?

Thanks.









--
Tim Ward



Re: Configuration file handling?

2016-11-28 Thread Tim Ward

On 28/11/2016 15:14, Christian Schneider wrote:

On 28.11.2016 15:14, Tim Ward wrote:

Thanks - replies inline:

On 28/11/2016 13:50, Christian Schneider wrote:
You already found the configFile option for features. This is the 
most widely used option.


So what is the "URL" in the  element, and how do my 
config files get there from git?
The URL typically is a maven url. See 
https://github.com/apache/karaf-decanter/blob/master/assembly/src/main/feature/feature.xml#L241
The default configs are deployed there using 
build-helper-maven-plugin: 
https://github.com/apache/karaf-decanter/blob/master/appender/kafka/pom.xml#L50-L69


Sorry, I'm not sure I get that.

Where, physically, do the config files go? (I've never used a "Maven 
URL", and we use bndtools not Maven for building.)


I was expecting something where the config files were somewhere inside 
the bundle or feature being deployed.
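
(For reference, the general shape of such a Maven URL, as resolved by pax-url, is mvn:groupId/artifactId[/version[/type[/classifier]]], so a feature can point at a .cfg artifact that has been published to a Maven repository. A rough sketch of such a feature entry, with purely illustrative coordinates and final name:)

<feature name="my-app" version="1.0.0">
  <configfile finalname="/etc/com.example.myapp.cfg">
    mvn:com.example/my-app/1.0.0/cfg/myapp
  </configfile>
  <bundle>mvn:com.example/my-app/1.0.0</bundle>
</feature>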


--
Tim Ward



Re: Configuration file handling?

2016-11-28 Thread Tim Ward

Thanks - replies inline:

On 28/11/2016 13:50, Christian Schneider wrote:
You already found the configFile option for features. This is the most 
widely used option.


So what is the "URL" in the  element, and how do my config 
files get there from git?


The alternative is the config option which simply adds the config in 
config admin but not in etc.


Both variants do not cover the upgrade case. A simple way is to just 
remove the old config to make sure the new default one is written. 
There is no built-in mechanism to preserve user changes in karaf.


"just remove the old config" suggests either a manual process (which I 
can't see going down well with Ops) or some non-trivial upgrade 
scripting (which I fear we're going to need anyway). And "just remove 
the old config" doesn't meet the case, as it'll result in a system that 
was working no longer working.


What do people do in practice? - every non-trivial system with a 
lifetime of more than one version must have come across this upgrade 
issue. Write an upgrade script which detects (how?) that an incoming 
feature has an update for a config file, quarantines one version of the 
file somewhere, and tells the user in big red letters that they've got 
some manual merging to do? Do such scripts, or frameworks for writing 
them, exist, to save wheel-reinventing, as this must be a common question?



Christian

On 28.11.2016 14:05, Tim Ward wrote:
I'm trying to work out how to handle configuration of a system 
deployed to Karaf.


I can see that configuration items put into etc/.cfg end up 
being passed to the @Activate method (or whatever), and that you can 
change configuration either by editing the .cfg file or from the 
Karaf command line (or, I'm guessing, from a JMX console). So that's 
all fine, I think.


The parts of the process that I don't yet understand are

(a) getting the .cfg file into etc in the first place
(b) what happens on upgrade.

Let's say the Java source files for the code are in git, and get 
built into bundles using Eclipse, and the bundles are installed into 
Karaf by some mechanism (I gather that there are some choices, such 
as simply dropping the bundle files into the deploy directory). So 
the first question is, how do the initial, default, states of the 
.cfg files get from git into the etc directory (I'm hoping for a less 
error prone answer than checking them out manually and copying them 
manually)?


Then, the life cycle of a configuration file in other contexts is 
typically


(1) when the software is first installed, the initial, default state 
of the configuration file gets installed at the same time as part of 
the same process
(2) the user then edits the configuration file to suit this 
particular deployment
(3) a later upgrade to the software comes with a new version of the 
configuration file containing some new items, and the upgrade process 
must ensure that neither these new items nor the user changes at (2) 
get lost.


So how is this managed?

What I've found so far is that one can create a "feature" and use 
<configfile>. But the documentation I've seen doesn't appear to cope 
with upgrade - I think it said that new versions of config files 
would be silently discarded if an old version was already there? - 
which doesn't meet the case (our Operations people get quite cross 
with upgrades that do this). At the very very least there needs to be 
a clear warning flagged up to the user that they need to do a manual 
merge.


And, what is the "URL" that one puts in a , assuming that 
there's a solution to the upgrade issue?


To summarise my questions:

(A) What are the options for getting initial, default config files 
from git to etc?

(B) How do people cope with the upgrade issue?
(C) If features and <configfile> are part of the solution, what's the 
"URL"?


Thanks.







--
Tim Ward



Configuration file handling?

2016-11-28 Thread Tim Ward
I'm trying to work out how to handle configuration of a system deployed 
to Karaf.


I can see that configuration items put into etc/.cfg end up being 
passed to the @Activate method (or whatever), and that you can change 
configuration either by editing the .cfg file or from the Karaf command 
line (or, I'm guessing, from a JMX console). So that's all fine, I think.
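
(As a concrete sketch of that mechanism, with a purely illustrative PID and property name: a file etc/com.example.poller.cfg containing poll.interval=30 would be delivered to a component like this.)

import java.util.Map;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

// The configuration PID matches the .cfg file name without its extension
@Component(configurationPid = "com.example.poller")
public class Poller {

    @Activate
    void activate(Map<String, Object> config) {
        Object interval = config.get("poll.interval");
        // apply the configuration here
    }
}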


The parts of the process that I don't yet understand are

(a) getting the .cfg file into etc in the first place
(b) what happens on upgrade.

Let's say the Java source files for the code are in git, and get built 
into bundles using Eclipse, and the bundles are installed into Karaf by 
some mechanism (I gather that there are some choices, such as simply 
dropping the bundle files into the deploy directory). So the first 
question is, how do the initial, default, states of the .cfg files get 
from git into the etc directory (I'm hoping for a less error prone 
answer than checking them out manually and copying them manually)?


Then, the life cycle of a configuration file in other contexts is typically

(1) when the software is first installed, the initial, default state of 
the configuration file gets installed at the same time as part of the 
same process
(2) the user then edits the configuration file to suit this particular 
deployment
(3) a later upgrade to the software comes with a new version of the 
configuration file containing some new items, and the upgrade process 
must ensure that neither these new items nor the user changes at (2) get 
lost.


So how is this managed?

What I've found so far is that one can create a "feature" and use 
<configfile>. But the documentation I've seen doesn't appear to cope 
with upgrade - I think it said that new versions of config files would 
be silently discarded if an old version was already there? - which 
doesn't meet the case (our Operations people get quite cross with 
upgrades that do this). At the very very least there needs to be a clear 
warning flagged up to the user that they need to do a manual merge.


And, what is the "URL" that one puts in a , assuming that 
there's a solution to the upgrade issue?


To summarise my questions:

(A) What are the options for getting initial, default config files from 
git to etc?

(B) How do people cope with the upgrade issue?
(C) If features and <configfile> are part of the solution, what's the "URL"?

Thanks.

--
Tim Ward



Re: Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward
ive   |  80 | 1.0.1  | Apache Felix Log Service
873 | Active   |  80 | 1.0.0.201611181055 | 
com.telensa.apps.planet.pc.provider
874 | Active   |  80 | 1.0.0.201611181057 | 
com.telensa.apps.planet.ws.application

876 | Active   |  80 | 3.2.0  | Apache Felix Http Jetty
877 | Active   |  80 | 1.1.2  | Apache Felix Servlet API
878 | Active   |  80 | 2.0.2  | Apache Felix Declarative 
Services

879 | Active   |  80 | 9.3.8.v20160314| Jetty :: Utilities
880 | Active   |  80 | 9.3.8.v20160314| Jetty :: Utilities :: 
Ajax(JSON)
881 | Active   |  80 | 2.0.0.201610141744 | 
osgi.enroute.executor.simple.provider
882 | Active   |  80 | 2.0.0.201610141744 | 
osgi.enroute.logger.simple.provider
883 | Active   |  80 | 2.0.0.201610141745 | 
osgi.enroute.web.simple.provider

884 | Active   |  80 | 1.3.100.v20150410-1453 | Coordinator
885 | Active   |  80 | 2.0.0.201610141744 | 
osgi.enroute.configurer.simple.provider
887 | Active   |  80 | 1.5.100.v20140428-1446 | Supplemental Equinox 
Functionality

888 | Active   |  80 | 1.4.8  | Apache Felix EventAdmin

The two with com.telensa.* are my code.

On 18/11/2016 15:58, Tim Ward wrote:

On 18/11/2016 15:49, Achim Nierbeck wrote:
In combination with the thread on the bnd-tools or osgi-dev I've been 
under the impression that you already tried to

tweak on certain configurational aspects.


Yes, I have been trying for some time to get at various aspects of 
Jetty configuration, and on making no progress at osgi-dev I was 
finally told "oh, if you're using Karaf that's out of scope for this 
list, try the Karaf list".


Therefore I tried to suggest to start with a vanilla instance so we 
can proceed from there.


Sorry, I still don't know what you mean by "vanilla instance".

I downloaded and installed Karaf.

I wrote a couple of servlets using bndtools.

I am now trying to debug them using the instructions in 
http://enroute.osgi.org/appnotes/bndtools-and-karaf.html.


If I leave out any of the above I've no longer got anything I can run 
to test whether or not I've managed to switch on Jetty request logging.


For instance this time it's your first statement about which bundles 
you actually did install yourself. As I'm not capable of
reading minds I have no clue whatsoever what you have been trying before 
and which bundles have been installed.


As per http://enroute.osgi.org/appnotes/bndtools-and-karaf.html plus 
the stuff I've written myself.



regards, Achim


2016-11-18 16:43 GMT+01:00 Tim Ward <mailto:t...@telensa.com>>:


On 18/11/2016 15:41, Achim Nierbeck wrote:

I'm not sure which thread I just responded before.
But best to start with a fresh Vanilla Karaf first.


What do you mean by that? - as far as I know I have downloaded
and installed Karaf, then installed a couple of tiny bundles of
my own so that I've got a servlet to run. How could it be much
more "vanilla" than that?


I fear that with all those attempts of yours to somehow configure the
server, it's not possible to
help via mailing list ...

regards, Achim

-- 


Tim Ward




--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> 
Committer & Project Lead

blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master




--
Tim Ward



--
Tim Ward



Re: Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward

On 18/11/2016 15:49, Achim Nierbeck wrote:
In combination with the thread on the bnd-tools or osgi-dev I've been 
under the impression that you already tried to

tweak on certain configurational aspects.


Yes, I have been trying for some time to get at various aspects of Jetty 
configuration, and on making no progress at osgi-dev I was finally told 
"oh, if you're using Karaf that's out of scope for this list, try the 
Karaf list".


Therefore I tried to suggest to start with a vanilla instance so we 
can proceed from there.


Sorry, I still don't know what you mean by "vanilla instance".

I downloaded and installed Karaf.

I wrote a couple of servlets using bndtools.

I am now trying to debug them using the instructions in 
http://enroute.osgi.org/appnotes/bndtools-and-karaf.html.


If I leave out any of the above I've no longer got anything I can run to 
test whether or not I've managed to switch on Jetty request logging.


For instance this time it's your first statement about which bundles 
you actually did install yourself. As I'm not capable of
reading minds I have no clue whatsoever what you have been trying before 
and which bundles have been installed.


As per http://enroute.osgi.org/appnotes/bndtools-and-karaf.html plus the 
stuff I've written myself.



regards, Achim


2016-11-18 16:43 GMT+01:00 Tim Ward <mailto:t...@telensa.com>>:


On 18/11/2016 15:41, Achim Nierbeck wrote:

I'm not sure which thread I just responded before.
But best to start with a fresh Vanilla Karaf first.


What do you mean by that? - as far as I know I have downloaded and
installed Karaf, then installed a couple of tiny bundles of my own
so that I've got a servlet to run. How could it be much more
"vanilla" than that?


I fear that with all those attempts of yours to somehow configure the
server, it's not possible to
help via mailing list ...

regards, Achim

-- 


Tim Ward




--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> 
Committer & Project Lead

blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master




--
Tim Ward



Re: Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward

On 18/11/2016 15:41, Achim Nierbeck wrote:

I'm not sure which thread I just responded before.
But best to start with a fresh Vanilla Karaf first.


What do you mean by that? - as far as I know I have downloaded and 
installed Karaf, then installed a couple of tiny bundles of my own so 
that I've got a servlet to run. How could it be much more "vanilla" than 
that?


I fear that with all those attempts of yours to somehow configure the server, 
it's not possible to

help via mailing list ...

regards, Achim

--

Tim Ward



Re: Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward
Sorry, I don't understand that. What should I actually *do* to "start 
with a clean state" - what changes should I make to which files?


On 18/11/2016 15:33, Achim Nierbeck wrote:

One more thing ...

Pax-Web already tries to run with the best default values, therefore 
it might be good if you start with a

"clean" state and start customizing from there.

regards, Achim

2016-11-18 16:31 GMT+01:00 Achim Nierbeck <mailto:bcanh...@googlemail.com>>:


hmm ...
as it's windows and it's always a hard time to write to files on
windows ...
could you experiment with the directory a bit.

it could also be

c:\\karaf

sorry it's been quite a long time since the last time I used windows.

OTH you might just leave it alone.

One way would be to start slow with only setting the

org.ops4j.pax.web.log.ncsa.enabled   = true

in the configuration.
The log file should be appended to $KARAF_HOME/logs if nothing
else is configured.

Usually you also find a log message in the logs, telling where it
tries to log to:

NCSARequestlogging is using the following directory:


    regards, Achim


2016-11-18 16:25 GMT+01:00 Tim Ward mailto:t...@telensa.com>>:

Ah, thank you.

(1) That wasn't clear from any documentation I found.

(2) I would have hoped to get an error message in the log if
I'd coded it wrongly?

(3) And it still doesn't work: I now have

org.ops4j.pax.web.log.ncsa.enabled   = true
org.ops4j.pax.web.log.ncsa.format= _mm_dd.request.log
org.ops4j.pax.web.log.ncsa.directory = c:/karaf/access/

but still no log file being created in c:\karaf\access.


On 18/11/2016 15:14, Achim Nierbeck wrote:

Hi Tim,

the format is wrong.
You need to set the format, but not the file to write to.
If you want to write to another directory you need to set the
following

org.ops4j.pax.web.log.ncsa.directory=c:/karaf/access/


    regards, Achim





2016-11-18 16:00 GMT+01:00 Tim Ward mailto:t...@telensa.com>>:

Yes, I've tried various versions of those things, and
they don't work for me.

I've just tried again, and it didn't work again.

(1) I put

org.ops4j.pax.web.log.ncsa.enabled = true
org.ops4j.pax.web.log.ncsa.format =
c:\\karaf\\access\\_mm_dd.request.log

into my org.ops4j.pax.web.cfg.

(2) Something appears to have noticed that this file has
changed, as witness

2016-11-18 14:55:28,880 | DEBUG | karaf\bin\..\etc |
configadmin | 3 - org.apache.felix.configadmin - 1.8.8 |
getProperties()
2016-11-18 14:55:28,881 | INFO | karaf\bin\..\etc |
fileinstall | 4 - org.apache.felix.fileinstall - 3.5.4 |
Updating configuration from org.ops4j.pax.web.cfg
2016-11-18 14:55:28,889 | DEBUG | g.ops4j.pax.web) |
configadmin | 3 - org.apache.felix.configadmin - 1.8.8 |
getProperties()

(3) I made sure the directory c:\karaf\access existed,
just in case the logging code doesn't create its own
directories.

(4) I made a request of the web server, which returned a
response to the browser. Checking the DEBUG level
messages in the Karaf log confirms that it did handle the
request.

(5) No log file appeared in c:\karaf\access.


On 18/11/2016 14:51, Achim Nierbeck wrote:

Hi Tim,

in [1], you'll find the current configurations available.
a configuration.json will not be used by pax-web. You
have to use the org.ops4j.pax.web.cfg as it's used to feed
the ConfigurationAdmin service. Those properties are
then propagated to the corresponding OSGi service.
Regarding NCSA logger, yes it's possible, just configure
it appropriately. We have a test for it, which is
disabled right now
as we have some "file" race-conditions on it. [2]
A full list of possible configurations can also be found
here [3]

regards, Achim

[1] -

http://ops4j.github.io/pax/web/SNAPSHOT/User-Guide.html#basic-configuration

<http://ops4j.github.io/pax/web/SNAPSHOT/User-Guide.html#basic-configuration>
[2] -

https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-itest/pax-web-itest-container/pax-web-itest-container-jetty/src/test/java/org/ops4j/pax/web/itest/jetty/HttpServiceIntegrationTest.java#L405-L437

<https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-itest/pax-

Re: Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward

I tried with only

org.ops4j.pax.web.log.ncsa.enabled = true

and no log file appeared anywhere I could find.

On 18/11/2016 15:31, Achim Nierbeck wrote:

hmm ...
as it's windows and it's always a hard time to write to files on 
windows ...

could you experiment with the directory a bit.

it could also be

c:\\karaf

sorry it's been quite a long time since the last time I used windows.

OTH you might just leave it alone.

One way would be to start slow with only setting the

org.ops4j.pax.web.log.ncsa.enabled = true

in the configuration.
The log file should be appended to $KARAF_HOME/logs if nothing else is 
configured.


Usually you also find a log message in the logs, telling where it 
tries to log to:


NCSARequestlogging is using the following directory:


regards, Achim


2016-11-18 16:25 GMT+01:00 Tim Ward <mailto:t...@telensa.com>>:


Ah, thank you.

(1) That wasn't clear from any documentation I found.

(2) I would have hoped to get an error message in the log if I'd
coded it wrongly?

(3) And it still doesn't work: I now have

org.ops4j.pax.web.log.ncsa.enabled   = true
org.ops4j.pax.web.log.ncsa.format= _mm_dd.request.log
org.ops4j.pax.web.log.ncsa.directory = c:/karaf/access/

but still no log file being created in c:\karaf\access.


On 18/11/2016 15:14, Achim Nierbeck wrote:

Hi Tim,

the format is wrong.
You need to set the format, but not the file to write to.
If you want to write to another directory you need to set the
following

org.ops4j.pax.web.log.ncsa.directory=c:/karaf/access/


regards, Achim





2016-11-18 16:00 GMT+01:00 Tim Ward <t...@telensa.com>:

Yes, I've tried various versions of those things, and they
don't work for me.

I've just tried again, and it didn't work again.

(1) I put

org.ops4j.pax.web.log.ncsa.enabled = true
org.ops4j.pax.web.log.ncsa.format  =
c:\\karaf\\access\\_mm_dd.request.log

into my org.ops4j.pax.web.cfg.

(2) Something appears to have noticed that this file has
changed, as witness

2016-11-18 14:55:28,880 | DEBUG | karaf\bin\..\etc |
configadmin| 3 - org.apache.felix.configadmin - 1.8.8 |
getProperties()
2016-11-18 14:55:28,881 | INFO  | karaf\bin\..\etc |
fileinstall| 4 - org.apache.felix.fileinstall - 3.5.4 |
Updating configuration from org.ops4j.pax.web.cfg
2016-11-18 14:55:28,889 | DEBUG | g.ops4j.pax.web) |
configadmin| 3 - org.apache.felix.configadmin - 1.8.8 |
getProperties()

(3) I made sure the directory c:\karaf\access existed, just
in case the logging code doesn't create its own directories.

(4) I made a request of the web server, which returned a
response to the browser. Checking the DEBUG level messages in
the Karaf log confirms that it did handle the request.

(5) No log file appeared in c:\karaf\access.


On 18/11/2016 14:51, Achim Nierbeck wrote:

Hi Tim,

in [1], you'll find the current configurations available.
a configuration.json will not be used by pax-web. You have
to use the org.ops4j.pax.web.cfg as it's used to feed
the ConfigurationAdmin service. Those properties are then
propagated to the corresponding OSGi service.
Regarding NCSA logger, yes it's possible, just configure it
appropriately. We have a test for it, which is disabled
right now
as we have some "file" race-conditions on it. [2]
A full list of possible configurations can also be found
here [3]

regards, Achim

[1] -

http://ops4j.github.io/pax/web/SNAPSHOT/User-Guide.html#basic-configuration

[2] -

https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-itest/pax-web-itest-container/pax-web-itest-container-jetty/src/test/java/org/ops4j/pax/web/itest/jetty/HttpServiceIntegrationTest.java#L405-L437

[3] -

https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-runtime/src/main/resources/OSGI-INF/metatype/metatype.xml



2016-11-18 15:43 GMT+01:00 Tim Ward <t...@telensa.com>:

On 18/11/2016 14:28, Achim Nierbeck wrote:

Oh and one more thing, which might be different.
Per default, jetty 

Re: Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward

Ah, thank you.

(1) That wasn't clear from any documentation I found.

(2) I would have hoped to get an error message in the log if I'd coded 
it wrongly?


(3) And it still doesn't work: I now have

org.ops4j.pax.web.log.ncsa.enabled   = true
org.ops4j.pax.web.log.ncsa.format= _mm_dd.request.log
org.ops4j.pax.web.log.ncsa.directory = c:/karaf/access/

but still no log file being created in c:\karaf\access.

On 18/11/2016 15:14, Achim Nierbeck wrote:

Hi Tim,

the format is wrong.
You need to set the format, but not the file to write to.
If you want to write to another directory you need to set the following

org.ops4j.pax.web.log.ncsa.directory=c:/karaf/access/


regards, Achim





2016-11-18 16:00 GMT+01:00 Tim Ward <t...@telensa.com>:


Yes, I've tried various versions of those things, and they don't
work for me.

I've just tried again, and it didn't work again.

(1) I put

org.ops4j.pax.web.log.ncsa.enabled = true
org.ops4j.pax.web.log.ncsa.format  =
c:\\karaf\\access\\_mm_dd.request.log

into my org.ops4j.pax.web.cfg.

(2) Something appears to have noticed that this file has changed,
as witness

2016-11-18 14:55:28,880 | DEBUG | karaf\bin\..\etc | configadmin
   | 3 - org.apache.felix.configadmin - 1.8.8 | getProperties()
2016-11-18 14:55:28,881 | INFO  | karaf\bin\..\etc | fileinstall
   | 4 - org.apache.felix.fileinstall - 3.5.4 | Updating
configuration from org.ops4j.pax.web.cfg
2016-11-18 14:55:28,889 | DEBUG | g.ops4j.pax.web) | configadmin
   | 3 - org.apache.felix.configadmin - 1.8.8 | getProperties()

(3) I made sure the directory c:\karaf\access existed, just in
case the logging code doesn't create its own directories.

(4) I made a request of the web server, which returned a response
to the browser. Checking the DEBUG level messages in the Karaf log
confirms that it did handle the request.

(5) No log file appeared in c:\karaf\access.


On 18/11/2016 14:51, Achim Nierbeck wrote:

Hi Tim,

in [1], you'll find the current configurations available.
a configuration.json will not be used by pax-web. You have to use
the org.ops4j.pax.web.cfg as it's used to feed
the ConfigurationAdmin service. Those properties are then
propagated to the corresponding OSGi service.
Regarding NCSA logger, yes it's possible, just configure it
appropriately. We have a test for it, which is disabled right now
as we have some "file" race-conditions on it. [2]
A full list of possible configurations can also be found here [3]

regards, Achim

[1] -
http://ops4j.github.io/pax/web/SNAPSHOT/User-Guide.html#basic-configuration

[2] -

https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-itest/pax-web-itest-container/pax-web-itest-container-jetty/src/test/java/org/ops4j/pax/web/itest/jetty/HttpServiceIntegrationTest.java#L405-L437

[3] -

https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-runtime/src/main/resources/OSGI-INF/metatype/metatype.xml



2016-11-18 15:43 GMT+01:00 Tim Ward <t...@telensa.com>:

On 18/11/2016 14:28, Achim Nierbeck wrote:

Oh and one more thing, which might be different.
Per default, jetty doesn't listen on port 8181 unless there
is at least one application capable of listening to it.
It's been a feature request in the past.


I'm sorry, I don't understand that. I have deliberately set
it to 8181 using configuration.json, and it works - my
servlets respond on 8181, before I did this the default was
8080.
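
For comparison, the HttpService route to the same port change is a single
property in etc/org.ops4j.pax.web.cfg; a minimal sketch, using the standard
OSGi HTTP Service property name rather than anything confirmed in this thread:

org.osgi.service.http.port = 8181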



regards, Achim


2016-11-18 15:27 GMT+01:00 Achim Nierbeck
<bcanh...@googlemail.com>:

Hi Tim,

as JB already said, that's part of the configuration.
More details on how to use Pax-Web can be found here
[1].
Also keep in mind, as Pax-Web is a HttpService, its
configuration should first be configured by the
HttpService configuration,
found in the org.ops4j.pax.web config file, like port etc.
Only for enhanced configurations should you use jetty.xml.
Another point here, the jetty.xml uses a slightly
different configuration syntax, as you configure an
already started
Jetty instead

Re: Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward
Yes, I've tried various versions of those things, and they don't work 
for me.


I've just tried again, and it didn't work again.

(1) I put

org.ops4j.pax.web.log.ncsa.enabled = true
org.ops4j.pax.web.log.ncsa.format  = 
c:\\karaf\\access\\_mm_dd.request.log


into my org.ops4j.pax.web.cfg.

(2) Something appears to have noticed that this file has changed, as witness

2016-11-18 14:55:28,880 | DEBUG | karaf\bin\..\etc | 
configadmin  | 3 - org.apache.felix.configadmin - 
1.8.8 | getProperties()
2016-11-18 14:55:28,881 | INFO  | karaf\bin\..\etc | 
fileinstall  | 4 - org.apache.felix.fileinstall - 
3.5.4 | Updating configuration from org.ops4j.pax.web.cfg
2016-11-18 14:55:28,889 | DEBUG | g.ops4j.pax.web) | 
configadmin  | 3 - org.apache.felix.configadmin - 
1.8.8 | getProperties()


(3) I made sure the directory c:\karaf\access existed, just in case the 
logging code doesn't create its own directories.


(4) I made a request of the web server, which returned a response to the 
browser. Checking the DEBUG level messages in the Karaf log confirms 
that it did handle the request.


(5) No log file appeared in c:\karaf\access.

On 18/11/2016 14:51, Achim Nierbeck wrote:

Hi Tim,

in [1], you'll find the current configurations available.
a configuration.json will not be used by pax-web. You have to use the 
org.ops4j.pax.web.cfg as it's used to feed
the ConfigurationAdmin service. Those properties are then propagated 
to the corresponding OSGi service.
Regarding NCSA logger, yes it's possible, just configure it 
appropriately. We have a test for it, which is disabled right now

as we have some "file" race-conditions on it. [2]
A full list of possible configurations can also be found here [3]

regards, Achim

[1] - 
http://ops4j.github.io/pax/web/SNAPSHOT/User-Guide.html#basic-configuration
[2] - 
https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-itest/pax-web-itest-container/pax-web-itest-container-jetty/src/test/java/org/ops4j/pax/web/itest/jetty/HttpServiceIntegrationTest.java#L405-L437
[3] - 
https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-runtime/src/main/resources/OSGI-INF/metatype/metatype.xml



2016-11-18 15:43 GMT+01:00 Tim Ward <t...@telensa.com>:


On 18/11/2016 14:28, Achim Nierbeck wrote:

Oh and one more thing, which might be different.
Per default, jetty doesn't listen on port 8181 unless there is at
least one application capable of listening to it.
It's been a feature request in the past.


I'm sorry, I don't understand that. I have deliberately set it to
8181 using configuration.json, and it works - my servlets respond
on 8181, before I did this the default was 8080.



regards, Achim


2016-11-18 15:27 GMT+01:00 Achim Nierbeck
<bcanh...@googlemail.com>:

Hi Tim,

as JB already said, that's part of the configuration.
More details on how to use Pax-Web can be found here [1].
Also keep in mind, as Pax-Web is a HttpService, its
configuration should first be configured by the HttpService
configuration,
found in the org.ops4j.pax.web config file, like port etc.
Only for enhanced configurations should you use jetty.xml.
Another point here, the jetty.xml uses a slightly
different configuration syntax, as you configure an already started
Jetty instead of configuring a fresh Jetty.
For example do

or




to adapt the configuration.
A complete jetty.xml can be found here [2].

regards, Achim

[1] - http://ops4j.github.io/pax/web/SNAPSHOT/User-Guide.html
[2]  -

https://github.com/ops4j/org.ops4j.pax.web/blob/master/samples/jetty-config-fragment/src/main/resources/jetty.xml



2016-11-18 15:16 GMT+01:00 Jean-Baptiste Onofré
<j...@nanthrax.net>:

Hi Tim,

when you install the jetty feature, you can override the
default configuration using etc/org.ops4j.pax.web.cfg.

This cfg file can refer to a jetty.xml using:

org.ops4j.pax.web.config.file=${karaf.base}/etc/jetty.xml

Then the etc/jetty.xml is a jetty file.

Regards
JB
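
To make that concrete, a jetty.xml that does nothing but attach an NCSA
request log to the already running server might look roughly like the sketch
below. The element and class names follow the Jetty 9.x XML syntax and have
not been verified against the pax-web version in this thread, so treat them
as assumptions:

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
    "http://www.eclipse.org/jetty/configure_9_0.dtd">
<Configure id="Server" class="org.eclipse.jetty.server.Server">
    <!-- attach an NCSA request log to the server pax-web has already created -->
    <Set name="RequestLog">
        <New class="org.eclipse.jetty.server.NCSARequestLog">
            <Set name="filename">c:/karaf/access/yyyy_mm_dd.request.log</Set>
            <Set name="append">true</Set>
            <Set name="extended">false</Set>
        </New>
    </Set>
</Configure>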


On 11/18/2016 03:11 PM, Tim Ward wrote:

Very simple, I hope, but days of research haven't
found an answer that
works yet.

How do I change the configuration of Jetty in Karaf? As
the simplest
possible initial beginner

Re: Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward

On 18/11/2016 14:28, Achim Nierbeck wrote:

Oh and one more thing, which might be different.
Per default, jetty doesn't listen on port 8181 unless there is at 
least one application capable of listening to it.

It's been a feature request in the past.


I'm sorry, I don't understand that. I have deliberately set it to 8181 
using configuration.json, and it works - my servlets respond on 8181, 
before I did this the default was 8080.



regards, Achim


2016-11-18 15:27 GMT+01:00 Achim Nierbeck <bcanh...@googlemail.com>:


Hi Tim,

as JB already said, that's part of the configuration.
More details on how to use Pax-Web can be found here [1].
Also keep in mind, as Pax-Web is a HttpService, its configuration
should first be configured by the HttpService configuration,
found in the org.ops4j.pax.web config file, like port etc.
Only for enhanced configurations should you use jetty.xml.
Another point here, the jetty.xml uses a slightly different
configuration syntax, as you configure an already started
Jetty instead of configuring a fresh Jetty.
For example do

or




to adapt the configuration.
A complete jetty.xml can be found here [2].

regards, Achim

[1] - http://ops4j.github.io/pax/web/SNAPSHOT/User-Guide.html
[2]  -

https://github.com/ops4j/org.ops4j.pax.web/blob/master/samples/jetty-config-fragment/src/main/resources/jetty.xml



2016-11-18 15:16 GMT+01:00 Jean-Baptiste Onofré <j...@nanthrax.net>:

Hi Tim,

when you install the jetty feature, you can override the
default configuration using etc/org.ops4j.pax.web.cfg.

This cfg file can refer to a jetty.xml using:

org.ops4j.pax.web.config.file=${karaf.base}/etc/jetty.xml

Then the etc/jetty.xml is a jetty file.

    Regards
JB


On 11/18/2016 03:11 PM, Tim Ward wrote:

Very simple, I hope, but days of research haven't found an
answer that
works yet.

How do I change the configuration of Jetty in Karaf? As the
simplest
possible initial beginner's question, how do I turn on
request logging?

The osgi-dev mailing list referred me here.

(I can actually see what it's doing with requests by
setting the log
level to DEBUG in org.ops4j.pax.logging.cfg and then
looking in
data\log\karaf.log, but given the volume and format of
output that's not
a practical solution.

I've tried putting stuff like
org.ops4j.pax.web.log.ncsa.format=_mm_dd.request.log in
org.ops4j.paw.web.cfg but that doesn't seem to do anything.

I've tried creating a gibberish jetty.xml, pointed to by
org.ops4j.pax.web.config.file in org.ops4j.paw.web.cfg, in
the hope of
getting some error messages about the gibberish, showing
that at least
something was reading the jetty.xml, but that didn't work.
It didn't
work doing the same via configuration.json either.

I haven't really found any actual *documentation* of any
of the above,
just snippets of example code, so all my attempts were
probably wrong
anyway.)

--
Tim Ward


-- 
Jean-Baptiste Onofré

jbono...@apache.org <mailto:jbono...@apache.org>
http://blog.nanthrax.net
Talend - http://www.talend.com




-- 


Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer &
Project Lead
blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master




--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> 
Committer & Project Lead

blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master




--
Tim Ward



Re: Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward

On 18/11/2016 14:27, Achim Nierbeck wrote:

Hi Tim,

as JB already said, that's part of the configuration.
More details on how to use Pax-Web can be found here [1].


I tried that, setting some of the org.ops4j.pax.web.log.ncsa.* 
properties, and it didn't work. *Exactly* what properties do I have to set 
to turn on request logging?


Also keep in mind, as Pax-Web is a HttpService, its configuration 
should first be configured by the HttpService configuration,

found in the org.ops4j.pax.web config file, like port etc.


Yes, I've managed to set the port, via configuration\configuration.json.

I don't mind where or how I set configuration, I'm quite prepared to 
take advice on doing it the "best" or "correct" way, but at present I 
don't have *any* way at all that actually works, which is why I'm 
clutching at all the straws I can find on the web.



Only for enhanced configurations should you use jetty.xml.
Another point here, the jetty.xml uses a slightly different 
configuration syntax, as you configure an already started

Jetty instead of configuring a fresh Jetty.
For example do

or




to adapt the configuration.
A complete jetty.xml can be found here [2].


Yes, I found that, but what I haven't found is an example telling me how 
to turn on request logging.


And I really did think that putting deliberate errors into the jetty.xml 
would cause errors to be reported, and that, therefore, the lack of any 
errors being reported indicated that the jetty.xml wasn't being read by 
anything, which would be consistent with the ncsa properties not working 
either if nothing were reading the entire org.ops4j.pax.web.cfg file.



regards, Achim

[1] - http://ops4j.github.io/pax/web/SNAPSHOT/User-Guide.html
[2]  - 
https://github.com/ops4j/org.ops4j.pax.web/blob/master/samples/jetty-config-fragment/src/main/resources/jetty.xml



2016-11-18 15:16 GMT+01:00 Jean-Baptiste Onofré <j...@nanthrax.net>:


Hi Tim,

when you install the jetty feature, you can override the default
configuration using etc/org.ops4j.pax.web.cfg.

This cfg file can refer to a jetty.xml using:

org.ops4j.pax.web.config.file=${karaf.base}/etc/jetty.xml

Then the etc/jetty.xml is a jetty file.

Regards
JB


On 11/18/2016 03:11 PM, Tim Ward wrote:

Very simple, I hope, but days of research haven't found an
answer that
works yet.

How do I change the configuration of Jetty in Karaf? As the simplest
possible initial beginner's question, how do I turn on request
logging?

The osgi-dev mailing list referred me here.

(I can actually see what it's doing with requests by setting
the log
level to DEBUG in org.ops4j.pax.logging.cfg and then looking in
data\log\karaf.log, but given the volume and format of output
that's not
a practical solution.

I've tried putting stuff like
org.ops4j.pax.web.log.ncsa.format=_mm_dd.request.log in
org.ops4j.paw.web.cfg but that doesn't seem to do anything.

I've tried creating a gibberish jetty.xml, pointed to by
org.ops4j.pax.web.config.file in org.ops4j.paw.web.cfg, in the
hope of
getting some error messages about the gibberish, showing that
at least
something was reading the jetty.xml, but that didn't work. It
didn't
work doing the same via configuration.json either.

I haven't really found any actual *documentation* of any of
the above,
    just snippets of example code, so all my attempts were
probably wrong
anyway.)

--
Tim Ward


-- 
Jean-Baptiste Onofré

jbono...@apache.org <mailto:jbono...@apache.org>
http://blog.nanthrax.net
Talend - http://www.talend.com




--

Apache Member
Apache Karaf <http://karaf.apache.org/> Committer & PMC
OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> 
Committer & Project Lead

blog <http://notizblog.nierbeck.de/>
Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>

Software Architect / Project Manager / Scrum Master




--
Tim Ward



Re: Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward

Thanks, but as I said, that's one of the things I tried:

"I've tried creating a gibberish jetty.xml, pointed to by 
org.ops4j.pax.web.config.file in org.ops4j.paw.web.cfg, in the hope of 
getting some error messages about the gibberish, showing that at least 
something was reading the jetty.xml, but that didn't work."


(Typo in the above, of course I meant org.ops4j.pax.web.cfg not 
org.ops4j.paw.web.cfg.)


I expected to see something in log\karaf.log telling me about syntax 
errors in the jetty.xml, but I didn't, even at DEBUG level, so I can 
have no confidence that the jetty.xml was read (and thus no obvious 
incentive to invest further time working out what to put in it).


On 18/11/2016 14:16, Jean-Baptiste Onofré wrote:

Hi Tim,

when you install the jetty feature, you can override the default 
configuration using etc/org.ops4j.pax.web.cfg.


This cfg file can refer to a jetty.xml using:

org.ops4j.pax.web.config.file=${karaf.base}/etc/jetty.xml

Then the etc/jetty.xml is a jetty file.

Regards
JB

On 11/18/2016 03:11 PM, Tim Ward wrote:

Very simple, I hope, but days of research haven't found an answer that
works yet.

How do I change the configuration of Jetty in Karaf? As the simplest
possible initial beginner's question, how do I turn on request logging?

The osgi-dev mailing list referred me here.

(I can actually see what it's doing with requests by setting the log
level to DEBUG in org.ops4j.pax.logging.cfg and then looking in
data\log\karaf.log, but given the volume and format of output that's not
a practical solution.

I've tried putting stuff like
org.ops4j.pax.web.log.ncsa.format=_mm_dd.request.log in
org.ops4j.paw.web.cfg but that doesn't seem to do anything.

I've tried creating a gibberish jetty.xml, pointed to by
org.ops4j.pax.web.config.file in org.ops4j.paw.web.cfg, in the hope of
getting some error messages about the gibberish, showing that at least
something was reading the jetty.xml, but that didn't work. It didn't
work doing the same via configuration.json either.

I haven't really found any actual *documentation* of any of the above,
just snippets of example code, so all my attempts were probably wrong
anyway.)

--
Tim Ward






--
Tim Ward



Beginner's question - Jetty configuration

2016-11-18 Thread Tim Ward
Very simple, I hope, but days of research haven't found an answer that 
works yet.


How do I change the configuration of Jetty in Karaf? As the simplest 
possible initial beginner's question, how do I turn on request logging?


The osgi-dev mailing list referred me here.

(I can actually see what it's doing with requests by setting the log 
level to DEBUG in org.ops4j.pax.logging.cfg and then looking in 
data\log\karaf.log, but given the volume and format of output that's not 
a practical solution.


I've tried putting stuff like 
org.ops4j.pax.web.log.ncsa.format=_mm_dd.request.log in 
org.ops4j.paw.web.cfg but that doesn't seem to do anything.


I've tried creating a gibberish jetty.xml, pointed to by 
org.ops4j.pax.web.config.file in org.ops4j.paw.web.cfg, in the hope of 
getting some error messages about the gibberish, showing that at least 
something was reading the jetty.xml, but that didn't work. It didn't 
work doing the same via configuration.json either.


I haven't really found any actual *documentation* of any of the above, 
just snippets of example code, so all my attempts were probably wrong 
anyway.)


--
Tim Ward



Re: SCR Reference annotation on field

2016-08-24 Thread Tim Ward
Release 6 was the first release with formal Maven artifacts delivered by the 
OSGi alliance. All the others were uploaded by helpful members, but didn't 
always match the internal names of the OSGi build artifacts.

The other new thing that release 6 offers is individual spec jars (so you don't 
have to use the entire API at a single release version). The 
org.osgi.service.component.annotations artifact contains just the DS 
annotations.

Tim

Sent from my iPhone
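
For anyone hitting the same search problem, the Release 6 coordinates being
discussed are roughly as follows; the version numbers are indicative only:

<!-- the whole compendium API at Release 6 -->
<dependency>
    <groupId>org.osgi</groupId>
    <artifactId>osgi.cmpn</artifactId>
    <version>6.0.0</version>
    <scope>provided</scope>
</dependency>

<!-- or just the DS annotations as an individual spec jar -->
<dependency>
    <groupId>org.osgi</groupId>
    <artifactId>org.osgi.service.component.annotations</artifactId>
    <version>1.3.0</version>
    <scope>provided</scope>
</dependency>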

> On 24 Aug 2016, at 14:13, Alex Soto  wrote:
> 
> Ha!  They renamed the artifact, that is why I could never find it.
> 
> Version 5 was: 
> 
>   org.osgi
>   org.osgi.compendium
> 
> And Version 6 is:
> 
> org.osgi
> osgi.cmpn
> 
> Not sure who decided that the abbreviated form was such an improvement. 
> Saving a few bytes in the name helps anybody?
> 
> Best regards,
> Alex soto
> 
> 
> 
>> On Aug 24, 2016, at 8:57 AM, Alex Soto  wrote:
>> 
>> Thank you Tim,  I knew the version was probably the issue, but I could not 
>> find version 6 of org.osgi.compendium  in any of the public Maven 
>> repositories.
>> Do you know of a public Maven repository where I can get the artifact?
>> 
>> Best regards,
>> Alex soto
>> 
>> 
>> 
>>> On Aug 23, 2016, at 6:32 PM, Tim Ward  wrote:
>>> 
>>> This is absolutely correct. 
>>> 
>>> The "Release 6" version of declarative services supports field injection. 
>>> The "Release 5" version that you are depending on does not!
>>> 
>>> Regards,
>>> 
>>> Tim
>>> 
>>> Sent from my iPhone
>>> 
>>>> On 23 Aug 2016, at 22:43, Alex Soto  wrote:
>>>> 
>>>> 
>>>> Hello,
>>>> 
>>>> I am new to SCR, but based on the "The OSGi Alliance OSGi Compendium, Release 
>>>> 6 July 2015"  the Reference annotation can be applied to fields.
>>>> @Reference
>>>> 
>>>> Identify the annotated member as a reference of a Service Component. When 
>>>> the annotation is applied to a method, the method is the bind method of 
>>>> the reference. When the annotation is applied to a field, the field will 
>>>> contain the bound service(s) of the reference. This annotation is not 
>>>> processed at runtime by Service Component Runtime. It must be processed by 
>>>> tools and used to add a Component Description to the bundle. In the 
>>>> generated Component Description for a component, the references must be 
>>>> ordered in ascending lexicographical order (using String.compareTo ) of 
>>>> the reference names.
>>>> 
>>>> The reference element of a Component Description. CLASS
>>>> METHOD,FIELD 
>>>> 
>>>> 
>>>> 
>>>> However, the actual jar declaring this annotation from Maven import: 
>>>> 
>>>>   org.osgi
>>>>   org.osgi.compendium
>>>>   5.0.0
>>>> Does not support Field, only Method.  So I can’t apply the @Reference 
>>>> annotation to fields.
>>>> 
>>>> What am I missing? 
>>>> 
>>>> Best regards,
>>>> 
>>>> Alex soto
>>>> 
> 


Re: SCR Reference annotation on field

2016-08-23 Thread Tim Ward
This is absolutely correct. 

The "Release 6" version of declarative services supports field injection. The 
"Release 5" version that you are depending on does not!

Regards,

Tim

Sent from my iPhone
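
As a minimal sketch of what the Release 6 annotations allow (the component
class and the injected service are invented for the example):

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.log.LogService;

@Component
public class FieldInjectionExample {

    // With the R6 annotations SCR populates this field before activate() runs;
    // no bind/unbind methods are needed.
    @Reference
    LogService log;

    @Activate
    void activate() {
        log.log(LogService.LOG_INFO, "activated with an injected LogService");
    }
}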

> On 23 Aug 2016, at 22:43, Alex Soto  wrote:
> 
> Hello,
> 
> I am new to SCR, but based on the "The OSGi Alliance OSGi Compendium, Release 6 
> July 2015"  the Reference annotation can be applied to fields.
> @Reference
> 
> Identify the annotated member as a reference of a Service Component. When the 
> annotation is applied to a method, the method is the bind method of the 
> reference. When the annotation is applied to a field, the field will contain 
> the bound service(s) of the reference. This annotation is not processed at 
> runtime by Service Component Runtime. It must be processed by tools and used 
> to add a Component Description to the bundle. In the generated Component 
> Description for a component, the references must be ordered in ascending 
> lexicographical order (using String.compareTo ) of the reference names.
> 
> The reference element of a Component Description. CLASS
> METHOD,FIELD 
> 
> 
> 
> However, the actual jar declaring this annotation from Maven import: 
> 
>   org.osgi
>   org.osgi.compendium
>   5.0.0
> Does not support Field, only Method.  So I can’t apply the @Reference 
> annotation to fields.
> 
> What am I missing? 
> 
> Best regards,
> 
> Alex soto
> 
> 
> 
> 


Re: Aries JPA 2.3.0: mapping file not used

2016-08-17 Thread Tim Ward
The issue appears to be that Aries JPA has no code to handle the mapping files 
element of the persistence unit. As a result no mapping file names are passed 
to the JPA provider.

I think this is an Aries issue.

Tim

Sent from my iPhone
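
For context, the element in question is the standard JPA one; in the
(tag-stripped) persistence.xml quoted further down it corresponds to
something like the sketch below, where the unit and file names follow
Jochen's example and everything else is trimmed:

<persistence-unit name="tasklist">
    <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
    <!-- the element that, per the analysis above, never reaches the provider -->
    <mapping-file>META-INF/tasklist_orm.xml</mapping-file>
</persistence-unit>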

> On 17 Aug 2016, at 21:09, Christian Schneider  wrote:
> 
> Hmmm .. interesting. We do not have code in Aries JPA that specifically 
> handles mapping files (as far as I know).
> So I wonder if this maybe is an issue in hibernate.
> 
> Christian
> 
> 2016-08-17 16:06 GMT+02:00 jochenw :
>> Hi Timothy,
>> 
>> using the tasklist-blueprint-cdi example
>> (https://github.com/cschneider/Karaf-Tutorial/tree/master/tasklist-blueprint-cdi),
>> I have exchanged H2 by PostgreSQL, added an orm.xml, exchanged the
>> datasource configuration with one for a PostgreSQL DB, created a PostgreSQL
>> DB named tasklist and a schema named tasklist_schema. And it works.
>> 
>> Then I changed the name of the mapping file from orm.xml to
>> tasklist_orm.xml, and it started writing the tables to the public schema.
>> 
>> So the problem seems to be that with other mapping file names than orm.xml,
>> it doesnt work. My changes are attached below.
>> 
>> Regards,
>> 
>> Jochen
>> 
>> persistence.xml:
>> 
>> 
>> http://java.sun.com/xml/ns/persistence";
>> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>> xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
>> http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd";>
>> 
>> 
>> org.hibernate.jpa.HibernatePersistenceProvider
>> 
>> 
>> 
>> osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=tasklist)
>> 
>> 
>> 
>> osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=tasklist)
>> META-INF/orm.xml
>> 
>> > value="org.hibernate.dialect.PostgreSQLDialect"/>
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> orm.xml:
>> 
>> 
>> http://java.sun.com/xml/ns/persistence/orm";
>> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>> xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm
>> orm_2_0.xsd"
>> version="2.0">
>> 
>> 
>> tasklist_schema
>> 
>> 
>> 
>> 
>> 
>> 
>> org.ops4j.datasource-tasklist.cfg:
>> 
>> dataSourceName=tasklist
>> osgi.jdbc.driver.name = PostgreSQL JDBC Driver-pool-xa
>> serverName = localhost
>> portNumber = 5432
>> databaseName = tasklist
>> user = postgres
>> password = postgres
>> 
>> 
>> 
>> last but not least, a change in the features.xml: replace "pax-jdbc-h2" by
>> "pax-jdbc-postgresql"
>> 
>> 
>> 
>> 
>> 
>> --
>> View this message in context: 
>> http://karaf.922171.n3.nabble.com/Aries-JPA-2-3-0-mapping-file-not-used-tp4047501p4047569.html
>> Sent from the Karaf - User mailing list archive at Nabble.com.
> 
> 
> 
> -- 
> -- 
> Christian Schneider
> http://www.liquid-reality.de
> 
> Open Source Architect
> http://www.talend.com


Re: RESTful web service in Karaf using CXF and blueprint

2016-08-15 Thread Tim Ward
If the services that you're looking to provide use JAX-RS then there is a 
specification proposal for OSGi R7 that you should look at. Prototyping work is 
starting in Apache Aries now.

Tim

Sent from my iPhone
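
Until that specification work lands, the blueprint approach described below
usually amounts to referencing an existing OSGi service from the CXF
jaxrs:server declaration, roughly along these lines; the interface and ids
are invented for the sketch and the exact schema details should be checked
against the CXF documentation:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs">

    <!-- pick up the resource published as an OSGi service elsewhere (e.g. by SCR) -->
    <reference id="taskResource" interface="com.example.rest.TaskResource"/>

    <jaxrs:server address="/tasks">
        <jaxrs:serviceBeans>
            <ref component-id="taskResource"/>
        </jaxrs:serviceBeans>
    </jaxrs:server>
</blueprint>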

> On 15 Aug 2016, at 18:56, Scott Lewis  wrote:
> 
>> On 8/15/2016 10:28 AM, Christian Schneider wrote:
>> 
>> 
>> ECF also supports CXF now but I am not sure at what level of CXF features.
> 
> This provider [1], will support all of the jax-rs Configurable/Configuration 
> capabilities that CXF supports.   It will also be possible to extend [1] to 
> create a custom provider, which are of course free use any of CXF's APIs.   
> Same with Jersey.
> 
> Scott
> 
> [1] 
> https://github.com/ECF/JaxRSProviders/tree/master/bundles/org.eclipse.ecf.provider.cxf.server
> 
>> 
>> Christian
>> 
>> 2016-08-15 17:21 GMT+02:00 Marc Durand :
>>> Hello,
>>> I was following Christian's tutorial here:
>>> http://liquid-reality.de/display/liquid/2011/12/22/Karaf+Tutorial+Part+4+-+CXF+Services+in+OSGi
>>> 
> And I also found some blog posts from JB that show how to deploy RESTful
>>> services using blueprint.
>>> 
>>> What I couldn't find was an example on how to deploy a RESTful service where
>>> the resource class is an OSGi service (to take advantage of SCR references
>>> to other services in the resource class).  I was able to do it by using a
>>>  element instead of a  element in the blueprint file.  Is
>>> this approach correct or will it lead to other problems down the road?
>>> 
>>> Thanks!
>>> Marc
>>> 
>>> 
>>> 
>>> 
>>> --
>>> View this message in context: 
>>> http://karaf.922171.n3.nabble.com/RESTful-web-service-in-Karaf-using-CXF-and-blueprint-tp4047529.html
>>> Sent from the Karaf - User mailing list archive at Nabble.com.
>> 
>> 
>> 
>> -- 
>> -- 
>> Christian Schneider
>> http://www.liquid-reality.de
>> 
>> Open Source Architect
>> http://www.talend.com
> 


Re: Access control of OSGi Web app?

2016-08-01 Thread Tim Ward
This sounds a lot like what you can do with the security services from OSGi 
enRoute. You can query for the user's full permission set so that parts of the UI 
can be disabled - obviously this is not a replacement for actually checking 
when the APIs are called!

Tim

Sent from my iPhone
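
For the JAAS route Christian describes further down, reading the current
user's principals inside Karaf boils down to something like this sketch
(helper class name invented, error handling omitted):

import java.security.AccessController;
import java.security.Principal;
import java.util.Collections;
import java.util.Set;
import javax.security.auth.Subject;

public final class CurrentUser {

    /** Principals of the JAAS Subject bound to the calling thread, or an empty set. */
    public static Set<Principal> principals() {
        Subject subject = Subject.getSubject(AccessController.getContext());
        return subject == null ? Collections.<Principal>emptySet() : subject.getPrincipals();
    }
}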

> On 1 Aug 2016, at 07:08, Sigmund Lee  wrote:
> 
> Hi all,
> 
> Thanks for advice and solutions you guys provided.
> 
> Seems like they are all proper ways to protect server-side services. But as I 
> said we are a website, what I need is a solution can integrate frontend & 
> backend together, provide page-level access control. basically two steps 
> involved:
> 
> 1. A externalized access control system to protect access to exposed 
> services(for example, restful service, web url, etc).
> 2. After access is permitted, return corresponding respond page to 
> client(aka, browser), and every button or link on this responded page can be 
> display or hidden based on permissions of current user. 
> 
> Basically, what I need is a solution not only free backend engineers from 
> hard-coded authz code, but also free frontend engineers from hard-coding.
> 
> Thanks again!
> 
> Bests.
> --
> Sig 
> 
> 
> 
>> On Fri, Jul 29, 2016 at 10:02 PM, Achim Nierbeck  
>> wrote:
>> yes, as filters without servlets can't be served. They don't have a URI 
>> binding. 
>> 
>> regards, Achim 
>> 
>> 2016-07-29 15:33 GMT+02:00 Nick Baker :
>>> Hey Achim,
>>> 
>>>  
>>> 
>>> Thanks for this example. We’re looking part of our ongoing OSGi migration 
>>> will be URL security as well. We’re using Spring Security in the legacy 
>>> non-OSGI space. So this is a timely conversation for us J
>>> 
>>>  
>>> 
>>> Quick question: are we still working with the limitation that Filters are 
>>> only invoked if a Servlet or Resource would already serve the URL?
>>> 
>>>  
>>> 
>>> -Nick
>>> 
>>>  
>>> 
>>> From: Achim Nierbeck 
>>> Reply-To: "user@karaf.apache.org" 
>>> Date: Friday, July 29, 2016 at 8:54 AM
>>> To: "user@karaf.apache.org" 
>>> Subject: Re: Access control of OSGi Web app?
>>> 
>>>  
>>> 
>>> Hi Sigmund, 
>>> 
>>>  
>>> 
>>> sorry for being late to the party ... if those solutions above don't work 
>>> for you you still have the possibility to create a customized filter which 
>>> you can re-use with your own applications. 
>>> 
>>> For this you can either go the "classical" way of using web-fragments, or 
>>> you can share the httpContext between your osgi bundles. For this you need 
>>> to declare your httpContext to be sharable and after that you just need to 
>>> attach your filter bundle to that sharable httpContext. 
>>> 
>>>  
>>> 
>>> Take a look at the following Sample, or better integration test of Pax Web 
>>> [1]. 
>>> 
>>>  
>>> 
>>> regards, Achim 
>>> 
>>>  
>>> 
>>> [1] - 
>>> https://github.com/ops4j/org.ops4j.pax.web/blob/master/pax-web-itest/pax-web-itest-container/pax-web-itest-container-jetty/src/test/java/org/ops4j/pax/web/itest/jetty/CrossServiceIntegrationTest.java#L59-L95
>>> 
>>>  
>>> 
>>> 2016-07-26 16:05 GMT+02:00 Christian Schneider :
>>> 
>>> In karaf authentication is based on JAAS. Using login modules you can 
>>> define what source to authenticate against.
>>> The karaf web console is protected by this by default. It is also possible 
>>> to enable JAAS based authentication for CXF e.g. for your REST services.
>>> There is also role based  and group based authentication out of the box.
>>> 
>>> There is no attribute based access control available but you can create 
>>> this based on the JAAS authentication.
>>> 
>>> This code can give you an idea of how to get the subject and the principals 
>>> from JAAS in karaf: 
>>> https://github.com/apache/aries/blob/trunk/blueprint/blueprint-authz/src/main/java/org/apache/aries/blueprint/authorization/impl/AuthorizationInterceptor.java#L69-L81
>>> 
>>> You could create your own annotations or OSGi service to handle the 
>>> attribute based authorization based on the authentication information.
>>> 
>>> Christian
>>> 
>>> 
>>> 
>>> On 26.07.2016 08:29, Sigmund Lee wrote:
>>> 
>>> We are a website, using OSGi as our microservices implementation. Every feature 
>>> of our site is a standalone OSGi-based webapp, split into several 
>>> OSGi bundles (api, impl, webapp, rest, etc). 
>>> 
>>>  
>>> 
>>> But there are functions that are coupled with more than one bundle, for example 
>>> Access Control & Authorization. Currently our authorization code is 
>>> hard-coded everywhere and was so hard to maintain. 
>>> 
>>>  
>>> 
>>> My question is, what's the proper way to handle with access control when 
>>> using OSGi? Is there any osgi-compatible ABAC(Attribute-based access 
>>> control, because our authorization model need calculated based on attribute 
>>> of resource and context/environment) framework?
>>> 
>>> 
>>> Thanks.
>>> 
>>>  
>>> 
>>> --
>>> 
>>> Sig 
>>> 
>>>  
>>> 
>>>  
>>> 
>>> -- 
>>> Christian Schneider
>>> http://www.liquid-reality.de
>>>  
>>> Open Source Architect

Re: Removal of start levels from Karaf 4.0.2 onwards - transaction manager not available in time

2016-07-27 Thread Tim Ward
Hi Jochen,

Sent from my iPhone

> On 27 Jul 2016, at 07:47, jochenw  wrote:
> 
> Hi Tim,
> 
> the transaction control service sounds interesting. I haven't found some
> example or tutorial which shows how to used it with JPA. Do you know whether
> something like this exists? Would be helpful to get started. I'm sure that
> the documentation pages contain all relevant information, but from an
> example it would be easier to see how to put all this together.

As the Transaction Control service is pretty new (the RFC started in January 
and the first Aries release was in June) there aren't many examples out there. 
The docs are pretty good by Aries standards, and I'd be keen to know what extra 
you'd like to see there.

As for what you need to do, you just need to inject a JPAEntityManagerProvider 
and a TransactionControl into your DAO and combine them into an EntityManager. 
The whole thing then works just like the JDBC examples (i.e. define your scope 
using transaction control and use the EntityManager normally).

The JPA persistence XML can be basically empty (just a name) and pointed at by 
a Meta-Persistence header. You can then do all the setup with a configuration 
as described here. 

http://aries.apache.org/modules/tx-control/xaJPA.html

Tim
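
A minimal DS-based DAO wired that way might look like the sketch below. The
entity parameter and method body are invented; the two injected services are
the ones provided by the Aries tx-control bundles described at the link above:

import javax.persistence.EntityManager;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.transaction.control.TransactionControl;
import org.osgi.service.transaction.control.jpa.JPAEntityManagerProvider;

@Component
public class TaskDao {

    @Reference
    TransactionControl txControl;

    @Reference
    JPAEntityManagerProvider provider;

    EntityManager em;

    @Activate
    void start() {
        // the returned EntityManager is scoped to the transactions managed by txControl
        em = provider.getResource(txControl);
    }

    public void add(Object entity) {
        // required() joins an existing transaction or starts a new one
        txControl.required(() -> {
            em.persist(entity);
            return null;
        });
    }
}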

> 
> Regarding the karaf feature: as Christian wrote, it wouldn't be needed to
> include that to the basic Karaf enterprise feature file. But is the package
> would have a Karaf feature file on its own, one could just include it to a
> custom Karaf composition using the karaf-maven-plugin. Already possible e.g.
> for ActiveMQ, Camel, Shiro, CXF etc. etc. etc.
> 
> We are planning to switch to JPA 2.x to solve our most urgent issues. Is in
> principle a small change, however, it needs some special handling when using
> a base class for the DAOs which contains the basic operations, since it is
> not possible to define the EntityManager in the base class (not for classes
> sharing the same persistence unit due to some bug in the framework - don't
> know whether that has been solved inbetween, and of course not if the base
> class should be used by classes which use different persistence units).
> 
> Regards,
> Jochen
> 
> 
> 
> --
> View this message in context: 
> http://karaf.922171.n3.nabble.com/Removal-of-start-levels-from-Karaf-4-0-2-onwards-transaction-manager-not-available-in-time-tp4047189p4047341.html
> Sent from the Karaf - User mailing list archive at Nabble.com.


Re: Access control of OSGi Web app?

2016-07-26 Thread Tim Ward
There are Authentication 
(https://github.com/osgi/design/blob/master/rfps/rfp-0164-Authentication.pdf) 
and Authorisation 
(https://github.com/osgi/design/blob/master/rfps/rfp-0165-Authorization.pdf) 
RFPs with the OSGi Alliance that talk about these issues. 

OSGi enRoute (http://enroute.osgi.org) provided the original work which 
inspired the RFPs, and also provides some usable implementations.

I hope this helps,

Tim Ward

IoT EG Chair, OSGi Alliance

Sent from my iPhone

> On 26 Jul 2016, at 07:29, Sigmund Lee  wrote:
> 
> We are a website, using OSGi as our microservices implementation. Every feature 
> of our site is a standalone OSGi-based webapp, split into several OSGi 
> bundles (api, impl, webapp, rest, etc). 
> 
> But there are functions that are coupled with more than one bundle, for example 
> Access Control & Authorization. Currently our authorization code is 
> hard-coded everywhere and was so hard to maintain. 
> 
> My question is, what's the proper way to handle with access control when 
> using OSGi? Is there any osgi-compatible ABAC(Attribute-based access control, 
> because our authorization model need calculated based on attribute of 
> resource and context/environment) framework?
> 
> 
> Thanks.
> 
> --
> Sig 


Re: WeavingHook using Felix

2016-07-16 Thread Tim Ward
I would like to add that you need to be *very* careful with weaving hooks. If 
you have to add dynamic imports then you should make sure to add all the 
necessary version/target information

It is also difficult to add a weaving hook using DS or blueprint. Lazy 
behaviours cause the Weaving hook to try to weave itself. 

Weaving hooks are also one of the few times that start up ordering really is 
important in OSGi. Ideally your hook will have no service dependencies, no 
configuration dependencies, be registered eagerly very early in the framework 
start process, and be very selective about what it tries to weave.

I also have one question for you. What are you trying to achieve and do you 
*really* need a weaving hook to do it? In most cases there are other, safer 
ways to achieve the same thing. 

Tim Ward

OSGi IoT EG chair

Sent from my iPhone
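
To illustrate the versioning point, a hook that adds a dynamic import would
normally pin it down rather than leaving it wide open. A bare-bones sketch,
with invented package names, registered from a plain activator precisely
because of the ordering concerns above:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.hooks.weaving.WeavingHook;
import org.osgi.framework.hooks.weaving.WovenClass;

public class ExampleWeavingActivator implements BundleActivator, WeavingHook {

    @Override
    public void start(BundleContext context) {
        // register eagerly, before the classes we care about get loaded
        context.registerService(WeavingHook.class, this, null);
    }

    @Override
    public void stop(BundleContext context) {
        // the registration is released automatically when the bundle stops
    }

    @Override
    public void weave(WovenClass wovenClass) {
        // be very selective about what gets touched
        if (!wovenClass.getClassName().startsWith("com.example.target.")) {
            return;
        }
        // pin the dynamic import to a version range and a specific provider
        wovenClass.getDynamicImports().add(
            "com.example.support;version=\"[1.0,2)\";bundle-symbolic-name=com.example.provider");
    }
}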

> On 16 Jul 2016, at 03:15, David Jencks  wrote:
> 
> I hope there aren’t any uses that don’t work with any R5 framework.   I 
> believe Apache Aries proxy and (IIRC) spi-fly use weaving hooks.
> 
> hope this helps
> david jencks
> 
>> On Jul 15, 2016, at 4:51 PM, Pratt, Jason  wrote:
>> 
>> Hello - Can anyone recommend a good example for WeavingHook that uses the 
>> Felix framework?
>>  
>> Regards,
>> Jason
> 


Re: Reasons that triggers IllegalStateException: Invalid BundleContext

2016-06-30 Thread Tim Ward
Hi Cristiano,

That exception means that you are trying to use a bundle context which is no 
longer valid because the bundle has been stopped.

There are all sorts of ways that code can end up hanging on to a Bundle Context 
when it shouldn't, and it may be caused by something as simple as a race 
condition on shutdown, all the way through to a completely invalid design.

My advice would be not to use the BundleContext or a Bundle Activator in your 
code at all, and to use a framework like DS instead. DS will manage the 
lifecycle of your components so that you don't need to use a BundleContext at 
all.

Best Regards,

Tim Ward

OSGi Alliance IoT EG Chair
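
The lifecycle point, as a sketch with invented names: a DS component only
does work between activate and deactivate, so there is no cached
BundleContext left to go stale after the bundle stops.

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;

@Component
public class Worker {

    private volatile Thread worker;

    @Activate
    void activate() {
        // start work only once SCR says the component (and its bundle) is active
        worker = new Thread(() -> { /* do the work */ });
        worker.start();
    }

    @Deactivate
    void deactivate() {
        // stop work here; no BundleContext bookkeeping required
        worker.interrupt();
    }
}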

> On 30 Jun 2016, at 15:57, Cristiano Costantini 
>  wrote:
> 
> Hello All,
> 
> In our application it sometimes happens that we find ourselves in situations where we get the 
> "Invalid BundleContext" exception:
> 
> java.lang.IllegalStateException: Invalid BundleContext. 
> at 
> org.apache.felix.framework.BundleContextImpl.checkValidity(BundleContextImpl.java:453)
> 
> What are the potential reasons such exception may be thrown?
> I'm searching to understand so I can hunt for a potential design issue in 
> some of our bundles... I've searched the web but I've found no hint.
> 
> Thank you!
> Cristiano
> 


Re: exported package in bundle A is not able to be imported in bundle B

2016-03-29 Thread Tim Ward
Using an Activator minimises your module's dependencies, but at the expense of 
any help in this sort of situation. It's also really easy to make 
mistakes when doing it, and hard to get useful integration with things like 
configuration admin. I would always recommend that an application use a 
container (usually Declarative Services) to register and/or consume OSGi 
services.

Regards,

Tim

Sent from my iPhone
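
For completeness, the kind of registration that had gone missing here looks
something like the blueprint sketch below; the implementation class name is
invented, while com.foo.bar.myClass is the interface from the diagnostics
quoted further down:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="myClassImpl" class="com.foo.bar.impl.MyClassImpl"/>

    <!-- without this, consumers sit in GracePeriod waiting on (objectClass=com.foo.bar.myClass) -->
    <service ref="myClassImpl" interface="com.foo.bar.myClass"/>

</blueprint>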

> On 29 Mar 2016, at 17:39, asookazian2  wrote:
> 
> Hi Timothy, thx for your response.  It turns out the Blueprint config.xml
> with the service registration (which was working previously) was removed and
> replaced with an Activator class which had the service registration line
> commented out.  This change was done by another developer and not
> communicated to me but anyways root cause is found.
> 
> Any recommendations on using BP config.xml vs. Activator class (e.g. you can
> debug the Activator class)?  thx.
> 
> 
> Timothy Ward wrote
>> The error that you've provided indicates a missing service dependency for
>> a service exposing the com.foo.bar.myClass interface. 
>> 
>> You should investigate why this service is not present, or if it is, why
>> bundle B has not wired to the same class space for package com.foo.bar as
>> the service provider.
>> 
>> Regards
>> 
>> Tim Ward
>> 
>> OSGi IoT EG Chair
>> 
>>> On 28 Mar 2016, at 23:36, asookazian2 <asookazian@> wrote:
>>> 
>>> Karaf 3.0.3
>>> 
>>> bundle B is in GracePeriod (and ultimately Failure) with dependency on
>>> bundle A (which is active and has lower start-level than bundle B).
>>> 
>>> eventually bundle:diag 123 (for bundle B) gives:
>>> 
>>> Status: Failure
>>> Blueprint
>>> 3/28/16 3:24 PM
>>> Exception: 
>>> null
>>> java.util.concurrent.TimeoutException
>>>   at
>>> org.apache.aries.blueprint.container.BlueprintContainerImpl$1.run(BlueprintContainerImpl.java:336)
>>>   at
>>> org.apache.aries.blueprint.utils.threading.impl.DiscardableRunnable.run(DiscardableRunnable.java:48)
>>>   at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>   at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>>   at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>>   at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>   at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>   at java.lang.Thread.run(Thread.java:745)
>>> 
>>> Missing dependencies: 
>>> (objectClass=com.foo.bar.myClass)
>>> 
>>> When I run package:exports | grep com.foo.bar I see only the one expected
>>> bundle which is exported same package (I've triple-checked the
>>> manifest.mf
>>> files for both bundles in data/cache/xyz).  i.e. there is no
>>> split-package
>>> problem in this scenario afaik.
>>> 
>>> Any idea how/why this happens and how to resolve?  thx.
>>> 
>>> 
>>> 
>>> --
>>> View this message in context:
>>> http://karaf.922171.n3.nabble.com/exported-package-in-bundle-A-is-not-able-to-be-imported-in-bundle-B-tp4046019.html
>>> Sent from the Karaf - User mailing list archive at Nabble.com.
> 
> 
> 
> 
> 
> --
> View this message in context: 
> http://karaf.922171.n3.nabble.com/exported-package-in-bundle-A-is-not-able-to-be-imported-in-bundle-B-tp4046019p4046042.html
> Sent from the Karaf - User mailing list archive at Nabble.com.


Re: exported package in bundle A is not able to be imported in bundle B

2016-03-29 Thread Tim Ward
The error that you've provided indicates a missing service dependency for a 
service exposing the com.foo.bar.myClass interface. 

You should investigate why this service is not present, or if it is, why bundle 
B has not wired to the same class space for package com.foo.bar as the service 
provider.

Regards

Tim Ward

OSGi IoT EG Chair

> On 28 Mar 2016, at 23:36, asookazian2  wrote:
> 
> Karaf 3.0.3
> 
> bundle B is in GracePeriod (and ultimately Failure) with dependency on
> bundle A (which is active and has lower start-level than bundle B).
> 
> eventually bundle:diag 123 (for bundle B) gives:
> 
> Status: Failure
> Blueprint
> 3/28/16 3:24 PM
> Exception: 
> null
> java.util.concurrent.TimeoutException
>at
> org.apache.aries.blueprint.container.BlueprintContainerImpl$1.run(BlueprintContainerImpl.java:336)
>at
> org.apache.aries.blueprint.utils.threading.impl.DiscardableRunnable.run(DiscardableRunnable.java:48)
>at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>at java.lang.Thread.run(Thread.java:745)
> 
> Missing dependencies: 
> (objectClass=com.foo.bar.myClass)
> 
> When I run package:exports | grep com.foo.bar I see only the one expected
> bundle which is exported same package (I've triple-checked the manifest.mf
> files for both bundles in data/cache/xyz).  i.e. there is no split-package
> problem in this scenario afaik.
> 
> Any idea how/why this happens and how to resolve?  thx.
> 
> 
> 
> --
> View this message in context: 
> http://karaf.922171.n3.nabble.com/exported-package-in-bundle-A-is-not-able-to-be-imported-in-bundle-B-tp4046019.html
> Sent from the Karaf - User mailing list archive at Nabble.com.


Re: importing and exporting same package in same bundle

2016-03-29 Thread Tim Ward
Hello,

Having both an export and an import declared for a package is not an error, and 
the behaviour is explicitly defined in the OSGi core specification. The process 
of importing and exporting a package is known as making the package 
"substitutable".

When you have both an export and an import the OSGi framework gets to decide 
whether your bundle will use its own copy of the package and export it to the 
outside world, or whether it will import the package from another bundle and 
hide both the export and the internal copy.

In fact I would usually advise not disabling this behaviour. Having a 
substitutable package is desirable in many cases! If, for example, you have two 
bundles that expose the same service interface and they both export, but do not 
import, the API package then a client bundle can never use both service 
implementations (it will only be wired to one of the exported packages)!

As with many things context is king - more information about what is in the 
package and how it is used is needed before a specific recommendation can be 
made for this bundle.

Regards,

Tim Ward

OSGi IoT EG Chair
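
In manifest terms a substitutable package is just this pair of headers
(version numbers illustrative), which bnd/maven-bundle-plugin generates by
default for an exported API package:

Export-Package: com.foo.api;version="1.0.0"
Import-Package: com.foo.api;version="[1.0,1.1)"

If suppressing the import really is wanted, the usual bnd switch is a
-noimport directive on the export; treat the exact syntax as an assumption
and check the bnd documentation:

<Export-Package>com.foo.api;version=1.0.0;-noimport:=true</Export-Package>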

> On 29 Mar 2016, at 09:25, CLEMENT Jean-Philippe 
>  wrote:
> 
> Hello,
> 
> For sure if the import is not necessary, then remove it :)
> 
> You may "ask" maven-bundle-plugin not to generate the import or you may put 
> the dependency generating the import as optional.
> 
> JP
> 
> -Message d'origine-
> De : asookazian2 [mailto:asookaz...@gmail.com] 
> Envoyé : mardi 29 mars 2016 00:55
> À : user@karaf.apache.org
> Objet : importing and exporting same package in same bundle
> 
> what are the side-effects/consequences of importing and exporting same 
> package in same bundle?  Is this normal or bad practice and if we are doing 
> this in our manifest.mf, how do we correct this in the pom.xml and 
> maven-bundle-plugin config/instructions?  thx.
> 
> 
> 
> --
> View this message in context: 
> http://karaf.922171.n3.nabble.com/importing-and-exporting-same-package-in-same-bundle-tp4046020.html
> Sent from the Karaf - User mailing list archive at Nabble.com.

