Re: Using Karaf with Java 9

2016-12-07 Thread Guillaume Nodet
Note that if you don't build locally, you will certainly need to add the
Maven snapshot repository in etc/org.ops4j.pax.url.mvn.cfg so that Karaf can
download all the needed snapshots.
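[Editor's note: a sketch of what that change could look like in etc/org.ops4j.pax.url.mvn.cfg; the existing repository list in your file will differ, and only the Apache snapshots entry comes from this thread.]

```properties
# Append the Apache snapshots repository to the existing repository list.
# The @snapshots@noreleases flags mark the repository as snapshot-only
# (pax-url-aether repository flag syntax).
org.ops4j.pax.url.mvn.repositories = \
    https://repo1.maven.org/maven2@id=central, \
    https://repository.apache.org/content/groups/snapshots@id=apache.snapshots@snapshots@noreleases
```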

2016-12-07 18:41 GMT+01:00 Guillaume Nodet :

> Yes, try with 4.1.0-SNAPSHOT.
>   https://repository.apache.org/content/groups/snapshots/org/apache/karaf/apache-karaf-minimal/4.1.0-SNAPSHOT/
>
> 2016-12-07 18:27 GMT+01:00 Gunnar Morling :
>
>> I'm using the latest stable, 4.0.7.
>>
>> What could I try instead, 4.1.0-SNAPSHOT? Are you deploying snapshots
>> somewhere? Or do I need to build from source myself?
>>
>> 2016-12-07 18:10 GMT+01:00 Jean-Baptiste Onofré :
>>
>>> Hi
>>>
>>> Improvements have already been done. Which version are you trying? A
>>> SNAPSHOT?
>>>
>>> Regards
>>> JB
>>> On Dec 7, 2016, at 18:05, Gunnar Morling  wrote:
>>>>
>>>> Hi,
>>>>
>>>> Has anyone had success with running Karaf on Java 9?
>>>>
>>>> When trying to start the container, I'm running into KARAF-3518 [1]
>>>> which is related to "endorsed" directories not working any longer under
>>>> Java 9. The issue is marked as unresolved; are there any efforts to make
>>>> Karaf usable with Java 9? This wouldn't have to include support for Jigsaw
>>>> in the first step, just being able to run on 9 would be a great 
>>>> achievement.
>>>>
>>>> Thanks,
>>>>
>>>> --Gunnar
>>>>
>>>> [1] https://issues.apache.org/jira/browse/KARAF-3518
>>>>
>>>>
>>
>
>
> --
> 
> Guillaume Nodet
> 
> Red Hat, Open Source Integration
>
> Email: gno...@redhat.com
> Web: http://fusesource.com
> Blog: http://gnodet.blogspot.com/
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: how to get Karaf's terminal width?

2016-12-07 Thread Guillaume Nodet
 |   at Proxy1c55bc35_2cb4_47b2_b9f4_f593e49a68ce.execute(Unknown Source)
> |   at org.apache.felix.gogo.runtime.CommandProxy.execute(CommandProxy.java:78)
> |   at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:480)
> |   at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:406)
> |   at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)
> |   at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:182)
> |   at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:119)
> |   at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:94)
> |   at org.apache.karaf.shell.console.impl.jline.ConsoleImpl.run(ConsoleImpl.java:210)
> |   at org.apache.karaf.shell.console.impl.jline.LocalConsoleManager$2$1$1.run(LocalConsoleManager.java:109)
> |   at java.security.AccessController.doPrivileged(Native Method)[:1.8.0_60]
> |   at org.apache.karaf.jaas.modules.JaasHelper.doAs(JaasHelper.java:57)[28:org.apache.karaf.jaas.modules:3.0.5]
> |   at org.apache.karaf.shell.console.impl.jline.LocalConsoleManager$2$1.run(LocalConsoleManager.java:102)[27:org.apache.karaf.shell.console:3.0.5]
>
> What am I missing?
>
> Thanks!
> -Max
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: how to get Karaf's terminal width?

2016-12-07 Thread Guillaume Nodet
Can you run "mvn dependency:tree" on your bundle? I suspect you have
a jline 1.x somewhere, which was picked up by the Maven bundle plugin when
computing the package version.
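[Editor's note: for example, with -Dverbose so Maven also shows the conflicting versions it omitted; the output below is illustrative, not from this thread.]

```
$ mvn dependency:tree -Dverbose -Dincludes=jline
[INFO] com.example:my-bundle:bundle:1.0.0-SNAPSHOT
[INFO] +- jline:jline:jar:2.13:compile
[INFO] \- some.thirdparty:lib:jar:1.0:compile
[INFO]    \- (jline:jline:jar:1.0:compile - omitted for conflict with 2.13)
```

A transitive jline 1.x like the last line would explain bnd computing the `[0.9.0,1.0.0)` import range.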

2016-12-07 20:06 GMT+01:00 Max Spring :

> Yes, I probably have here some build issue going on.
>
> I've got the jline bundle v2.13.0 in my container:
>
> | karaf@root()> list -s -t 0 | grep jline
> |  22 | Active   |  30 | 2.13.0   | jline
> |
> | karaf@root()> bundle:headers 22
> |
> | JLine (22)
> | --
> | Archiver-Version = Plexus Archiver
> | Originally-Created-By = Apache Maven Bundle Plugin
> | Created-By = Apache Maven Bundle Plugin
> | Manifest-Version = 1.0
> | Bnd-LastModified = 1439224319120
> | Build-Jdk = 1.8.0_45
> | Built-By = gnodet
> | Tool = Bnd-2.4.1.201501161923
> |
> | Bundle-License = http://www.opensource.org/licenses/bsd-license.php
> | Bundle-ManifestVersion = 2
> | Bundle-SymbolicName = jline
> | Bundle-Version = 2.13.0
> | Bundle-Name = JLine
> | Bundle-Description = Sonatype helps open source projects to set up Maven
> repositories on https://oss.sonatype.org/
> |
> | Require-Capability =
> |   osgi.ee;filter:=(&(osgi.ee=JavaSE)(version=1.5))
> |
> | Export-Package =
> |   jline;uses:=jline.internal;version=2.13.0,
> |   jline.console;uses:="jline,jline.console.completer,jline.console.history";version=2.13.0,
> |   jline.console.completer;uses:=jline.console;version=2.13.0,
> |   jline.console.history;version=2.13.0,
> |   jline.console.internal;version=2.13.0,
> |   jline.internal;version=2.13.0,
> |   org.fusesource.jansi;version=1.11
>
> The Maven artifact with version 2.13 is identical to the cached bundle
> (not sure why the Maven artifact doesn't have the micro version, though):
>
> | $ diff karaf/data/cache/bundle22/version0.0/bundle.jar
> ~/.m2/repository/jline/jline/2.13/jline-2.13.jar; echo $?
> | 0
>
> If I do this in my POM
>
> |   <dependencies>
> |     ...
> |     <dependency>
> |       <groupId>jline</groupId>
> |       <artifactId>jline</artifactId>
> |       <version>2.13</version>
> |     </dependency>
> |     ...
> |   </dependencies>
> |
> |   <build>
> |     <plugins>
> |       <plugin>
> |         <groupId>org.apache.felix</groupId>
> |         <artifactId>maven-bundle-plugin</artifactId>
> |         <extensions>true</extensions>
> |         <inherited>true</inherited>
> |         <configuration>
> |           <instructions>
> |             <Bundle-Name>${project.artifactId}</Bundle-Name>
> |           </instructions>
> |         </configuration>
> |       </plugin>
> |     </plugins>
> |   </build>
>
> then when I try to install my feature which references my bundle with the
> example command, I get:
>
> | karaf@root()> feature:install my-feature
> | no such process "maven/boot" to wait for
> | Error executing command: Can't install feature my-feature/0.0.0:
> | Could not start bundle mvn:org.example/example-bundle/1.0.0-SNAPSHOT in
> feature(s) example-bundle-1.0.0-SNAPSHOT: Unresolved constraint in bundle
> example-bundle [195]: Unable to resolve 195.0: missing requirement [195.0]
> osgi.wiring.package; (&(osgi.wiring.package=jline)(
> version>=0.9.0)(!(version>=1.0.0)))
>
> But when I explicitly specify the version of the jline package, my bundle
> does install:
>
> |   <dependencies>
> |     ...
> |     <dependency>
> |       <groupId>jline</groupId>
> |       <artifactId>jline</artifactId>
> |       <version>2.13</version>
> |     </dependency>
> |     ...
> |   </dependencies>
> |
> |   <build>
> |     <plugins>
> |       <plugin>
> |         <groupId>org.apache.felix</groupId>
> |         <artifactId>maven-bundle-plugin</artifactId>
> |         <extensions>true</extensions>
> |         <inherited>true</inherited>
> |         <configuration>
> |           <instructions>
> |             <Bundle-Name>${project.artifactId}</Bundle-Name>
> |             <Import-Package>
> |               jline*;version="2.13.0",
> |               *
> |             </Import-Package>
> |           </instructions>
> |         </configuration>
> |       </plugin>
> |     </plugins>
> |   </build>
>
> I guess this leads to my bundle not finding the correct Terminal classes.
>
> -Max
>
>
> On 12/07/2016 10:42 AM, Guillaume Nodet wrote:
>
>> The second approach should definitely work, see
>>   https://github.com/apache/karaf/blob/karaf-3.0.x/shell/commands/src/main/java/org/apache/karaf/shell/commands/impl/MoreAction.java#L40
>>
>> The exception is a bit unexpected.  Maybe you're compiling against a very
>> old version of jline ? In JLine 1.x, the Terminal was an abstract class,
>> but it has been changed to an interface in jline 2.x.
>>
>> 2016-12-07 19:16 GMT+01:00 Max Spring > m2spr...@springdot.org>>:
>>
>>
>> Is there a way to get the actual jline terminal object in a Karaf
>> command?
>> I want to format the command output depending on the terminal width.
>> I'm on Ubuntu 16.04, Java 1.8.0_60-x64, and Karaf 3.0.5.
>>
>> This one
>>
>>   package org.example;
>>
>>   import jline.TerminalFactory;
>>
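[Editor's note: the quoted snippet is truncated in the archive. For context, a minimal sketch of the jline 2.x API it starts to use; the class name is made up, and running this requires the jline 2.x jar on the classpath.]

```java
package org.example;

import jline.Terminal;
import jline.TerminalFactory;

public class WidthProbe {
    public static void main(String[] args) {
        // In jline 2.x, TerminalFactory.get() returns the Terminal for the
        // current console; Terminal is an interface (it was an abstract class
        // in jline 1.x, which is why compiling against 1.x breaks at runtime).
        Terminal terminal = TerminalFactory.get();
        System.out.println("width=" + terminal.getWidth());
    }
}
```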

Re: Problems after removing a feature repository in Karaf 4.0

2016-12-12 Thread Guillaume Nodet
2016-12-12 15:16 GMT+01:00 Frank_S :

> Hi,
>
> we run our application in Karaf. It has a pluggable architecture in the
> sense that extra bundles get installed that perform certain services. The
> services are defined using Blueprint descriptors. For installing the
> bundles, we use the features framework of Karaf. More specifically, we
> generate a feature repository file, add the feature repository, install the
> feature, remove the feature repository, and then delete the generated
> feature repository file.
> This worked well in Karaf 2.4.0, but if we do this in Karaf 4.0, the first
> feature install succeeds, but a subsequent feature install yields the
> following exception :
>
> org.osgi.service.resolver.ResolutionException: Unable to resolve root:
> missing requirement [root] osgi.identity;
> osgi.identity=com.ikanalm.phase.echoparameters; type=karaf.feature;
> version="[1.0.0,1.0.0]";
> filter:="(&(osgi.identity=com.ikanalm.phase.echoparameters)(type=karaf.feature)(version>=1.0.0)(version<=1.0.0))"
> at org.apache.felix.resolver.ResolutionError.toException(ResolutionError.java:42)[8:org.apache.karaf.features.core:4.0.7]
> at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:235)[8:org.apache.karaf.features.core:4.0.7]
> at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:158)[8:org.apache.karaf.features.core:4.0.7]
> at org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:216)[8:org.apache.karaf.features.core:4.0.7]
> at org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:263)[8:org.apache.karaf.features.core:4.0.7]
> at org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1176)[8:org.apache.karaf.features.core:4.0.7]
> at org.apache.karaf.features.internal.service.FeaturesServiceImpl$1.call(FeaturesServiceImpl.java:1074)[8:org.apache.karaf.features.core:4.0.7]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_60]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_60]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_60]
> at java.lang.Thread.run(Thread.java:745)[:1.8.0_60]
>
> The bundle com.ikanalm.phase.echoparameters, version 1.0.0 is the one that
> was installed using the generated feature repository file, which looks like
> this :
> <features xmlns="http://karaf.apache.org/xmlns/features/v1.2.1"
>           name="com.ikanalm.phase.echoparameters1.0.0_1481545020697">
>   <feature name="com.ikanalm.phase.echoparameters" version="1.0.0">
>     <bundle>file://SOMEPATH/com.ikanalm.phase.echoparameters-1.0.0.jar</bundle>
>   </feature>
> </features>
>
> The feature doesn't show up when I do "feature:list -i", and the feature
> repository isn't listed when I do "feature:repo-list". The bundle, however,
> is still installed, so our application is able to use it.
>
> Unfortunately, the feature is still listed in the requirements list of the
> “root” instance :
> karaf@root()> requirement-list
> Region | Requirement
> ---
> root   | feature:com.ikanalm.phase.echoparameters/[1.0.0,1.0.0]
>
> When I do a “requirement-remove” on the feature, the feature is
> uninstalled,
> and after that I can again install other features without the exception
> popping up, but regrettably, the bundle is also uninstalled, so that
> doesn't
> really help us.
>
> Is there a way to remove a requirement on an instance without uninstalling
> the feature or the bundle ? Or is the removal of the feature repository a
> bad idea, and are we wrong to use the feature framework merely as a
> “vessel”
> to get our bundles installed ?
>

You shouldn't really remove a repository if you haven't uninstalled the
features in that repository.  There's an additional check in 4.1 which
forbids doing that (see https://issues.apache.org/jira/browse/KARAF-4060).

And if you uninstall the feature, the bundles will be uninstalled if they
are not needed anymore.

The reason is that the Karaf 4 features service considers the set of
requirements and computes a set of bundles that need to be installed.  If
you remove a requirement on the feature, the bundle does not need to be
installed anymore.  And if you remove the repository, the requirement on
the feature can't be satisfied anymore.
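[Editor's note: in other words, the safe order is to drop the feature first and only then remove its repository. A sketch using the names from the example above; as noted below, this uninstalls the feature's bundles too.]

```
karaf@root()> feature:uninstall com.ikanalm.phase.echoparameters
karaf@root()> feature:repo-remove com.ikanalm.phase.echoparameters1.0.0_1481545020697
```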

There's no easy workaround for this specific problem: there's definitely no
way to keep a feature installed and remove the corresponding repository.
Why do you see keeping the repository 

Re: Don't understand: missing requirement

2016-12-14 Thread Guillaume Nodet
The bundle has a requirement on an OSGi service with the interface/class
com.y.SomeAPI; resolution fails because no other bundle provides that service.
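[Editor's note: such a requirement typically comes from a generated manifest header, e.g. the maven-bundle-plugin deriving it from a Blueprint reference or DS component. A sketch of the two headers involved; the exact attribute syntax of the providing side is an assumption to verify against the OSGi osgi.service namespace spec.]

```
# In the requiring bundle's MANIFEST.MF (this is what the error reports missing):
Require-Capability: osgi.service;effective:=active;
 filter:="(objectClass=com.y.SomeAPI)"

# A bundle that actually registers the service should advertise:
Provide-Capability: osgi.service;effective:=active;
 objectClass="com.y.SomeAPI"
```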

2016-12-14 18:10 GMT+01:00 CLEMENT Jean-Philippe <
jean-philippe.clem...@fr.thalesgroup.com>:

> Hello,
>
> We did a small change somewhere leading a Karaf freeze. Karaf (4.0.7) is
> assembled with our features.
>
> The log contains:
> [caused by: Unable to resolve com.X/1.0.5.SNAPSHOT: missing requirement
> [com.X/1.0.5.SNAPSHOT] osgi.service; effective:=active;
> filter:="(objectClass=com.y.SomeAPI)"]
>
> I don't really catch - what does this message mean?
>
> Thank you :)
>
> Regards,
> JP
>



-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: SCP and SFTP not working

2017-01-05 Thread Guillaume Nodet
No, sftp and scp are fully supported by SSHD.
In 2.2.x, scp seems enabled, but not sftp:

https://github.com/apache/karaf/blob/karaf-2.2.x/shell/ssh/src/main/resources/OSGI-INF/blueprint/shell-ssh.xml#L86-L92

3.x has both scp and sftp enabled:

https://github.com/apache/karaf/blob/karaf-3.0.x/shell/ssh/src/main/resources/OSGI-INF/blueprint/shell-ssh.xml#L103-L120

2017-01-05 17:37 GMT+01:00 Achim Nierbeck :

> Hi,
>
> as Karaf uses Apache Mina for its SSH server, you'll most likely need to
> check whether it's even possible.
> AFAIK it's only a simple SSH server, so no extra functionality like sftp
> or scp is supported.
> If you want a full-blown SSH server you need to use the one provided by
> the operating system.
>
> regards, Achim
>
>
> 2017-01-05 17:25 GMT+01:00 Charles :
>
>> Hi
>> I’m trying to verify if I’ve found an issue with the SFTP/SCP server
>> within Karaf, and in the inherited project Servicemix.  I use Camel to
>> perform many integrations, storing properties files within the /etc folder.
>> Ideally I’d like access to the /etc folder within Karaf to allow updates
>> to the configuration files by my build tooling.
>> Sadly I’ve not managed to get the SFTP / SCP facility to work.  I’m
>> running Servicemix 4.5.3 (Karaf 2.2) on Java 1.6_45, primarily on Windows.
>> Any attempt to use SFTP via an FTP client results in an error, or in a
>> folder presented to the client with a name that’s actually a date, which
>> can’t be navigated.
>> Thinking this may be a platform issue I’ve tried it on later vanilla
>> Servicemix versions 6.0, 7.0 m2, and Karaf 4.  I’ve tried on those versions
>> on Linux, and using Java 7 & 8.  No combination results in filesystem
>> access.
>> Can anyone confirm this facility works under other Karaf versions or
>> environments?  Or am I doing something really dumb?
>> Thanks
>>
>
>
>
> --
>
> Apache Member
> Apache Karaf <http://karaf.apache.org/> Committer & PMC
> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer &
> Project Lead
> blog <http://notizblog.nierbeck.de/>
> Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>
>
> Software Architect / Project Manager / Scrum Master
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: Aw: Re: Shell commands are unavailable #1

2017-01-05 Thread Guillaume Nodet
gogo.runtime.Pipe.call(Pipe.java:229)
>>> at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:59)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Command not found: logout
>>> karaf@root()> system:shutdown
>>> [org.apache.karaf.shell.support.ShellUtil] : Unknown command entered
>>> org.apache.felix.gogo.runtime.CommandNotFoundException: Command not found: system:shutdown
>>> at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:549)
>>> at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:478)
>>> at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:367)
>>> at org.apache.felix.gogo.runtime.Pipe.doCall(Pipe.java:417)
>>> at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:229)
>>> at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:59)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>> Command not found: system:shutdown
>>>
>>> This is a hard blocker right now. May someone please have a look at this.
>>>
>>> Thanks a lot!
>>>
>>> Regards,
>>> Jens
>>>
>>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net[http://blog.nanthrax.net]
>> Talend - http://www.talend.com[http://www.talend.com]
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: Aw: Re: Shell commands are unavailable #1

2017-01-06 Thread Guillaume Nodet
Thx, I pushed my fix.
So the command is correctly discovered, however, the script never returns
for some reason.  I'll investigate.

2017-01-06 8:59 GMT+01:00 Achim Nierbeck :

> @Guillaume
> look here: https://issues.apache.org/jira/browse/KARAF-4926
>
> :)
>
> regards, Achim
>
> 2017-01-06 8:33 GMT+01:00 Guillaume Nodet :
>
>> I found the problem and I have a fix locally.
>> Has a jira been raised already ?
>>
>> 2017-01-06 7:39 GMT+01:00 Jean-Baptiste Onofré :
>>
>>> That's a good point and actually, it's the real end-user usage of shell
>>> (to install the wrapper without starting a Karaf instance).
>>>
>>> Let me take a look.
>>>
>>> Thanks,
>>> Regards
>>> JB
>>>
>>>
>>> On 01/06/2017 07:37 AM, Jens Offenbach wrote:
>>>
>>>> Ok, I can live with that, but the command "shell wrapper:install
>>>> --start-type DEMAND_START" should work or has something been changed?
>>>>
>>>> Give it a try:
>>>> $ shell wrapper:install --start-type DEMAND_START
>>>> [org.apache.karaf.shell.impl.console.ConsoleSessionImpl] :
>>>> completionMode property is not defined in etc/org.apache.karaf.shell.cfg
>>>> file. Using default completion mode.
>>>> [org.apache.karaf.shell.support.ShellUtil] : Unknown command entered
>>>> org.apache.felix.gogo.runtime.CommandNotFoundException: Command not
>>>> found: wrapper:install
>>>> at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:549)
>>>> at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:478)
>>>> at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:367)
>>>> at org.apache.felix.gogo.runtime.Pipe.doCall(Pipe.java:417)
>>>> at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:229)
>>>> at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:59)
>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>> at java.lang.Thread.run(Thread.java:745)
>>>> Command not found: wrapper:install
>>>>
>>>> Thanks!
>>>>
>>>>
>>>> Gesendet: Freitag, 06. Januar 2017 um 07:32 Uhr
>>>> Von: "Jean-Baptiste Onofré" 
>>>> An: user@karaf.apache.org
>>>> Betreff: Re: Aw: Shell commands are unavailable #1
>>>> Hi Jens,
>>>>
>>>> shell doesn't really start a full Karaf instance, so, it's an expected
>>>> behavior IMHO.
>>>>
>>>> Regards
>>>> JB
>>>>
>>>> On 01/06/2017 06:50 AM, Jens Offenbach wrote:
>>>>
>>>>> The problem seems to occur only when using the "shell" script from the
>>>>> "bin" folder. Using "./start", followed by "./client", the commands
>>>>> "system:shutdown" and "logout" are working properly.
>>>>>
>>>>> Jens
>>>>>
>>>>>
>>>>> Gesendet: Freitag, 06. Januar 2017 um 06:43 Uhr
>>>>> Von: "Jens Offenbach" 
>>>>> An: user@karaf.apache.org
>>>>> Betreff: Shell commands are unavailable
>>>>> Hi,
>>>>> I am using Apache Karaf 4.1.0-Snapshot and in the current snapshot
>>>>> release, there seems to be a problem regarding shell commands. I have
>>>>> opened the following issue: https://issues.apache.org/jira
>>>>> /browse/KARAF-4926. But things seem to be much more problematic. Even
>>>>> the "logout" command is reported as unavailable.
>>>>>
>>>>> I am using the latest snapshot (apache-karaf-4.1.0-20170105.165025-279.tar.gz)
>>>>> from the Apache snapshot repository.
>>>>>
>>>>> Give it a try:
>>>>> $ ./shell
>>>>> [org.apache.karaf.shell.impl.console.ConsoleSessionImpl] :
>>>>> completionMode property is not defined in etc/org.apache.karaf.shell.cfg
>>>>> file. Using default completion mode.
>>>>> __ 

Re: Aw: Re: Shell commands are unavailable #1

2017-01-06 Thread Guillaume Nodet
It should be ok now.

2017-01-06 9:04 GMT+01:00 Guillaume Nodet :

> Thx, I pushed my fix.
> So the command is correctly discovered, however, the script never returns
> for some reason.  I'll investigate.
>
> 2017-01-06 8:59 GMT+01:00 Achim Nierbeck :
>
>> @Guillaume
>> look here: https://issues.apache.org/jira/browse/KARAF-4926
>>
>> :)
>>
>> regards, Achim
>>
>> 2017-01-06 8:33 GMT+01:00 Guillaume Nodet :
>>
>>> I found the problem and I have a fix locally.
>>> Has a jira been raised already ?
>>>
>>> 2017-01-06 7:39 GMT+01:00 Jean-Baptiste Onofré :
>>>
>>>> That's a good point and actually, it's the real end-user usage of shell
>>>> (to install the wrapper without starting a Karaf instance).
>>>>
>>>> Let me take a look.
>>>>
>>>> Thanks,
>>>> Regards
>>>> JB
>>>>
>>>>
>>>> On 01/06/2017 07:37 AM, Jens Offenbach wrote:
>>>>
>>>>> Ok, I can live with that, but the command "shell wrapper:install
>>>>> --start-type DEMAND_START" should work or has something been changed?
>>>>>
>>>>> Give it a try:
>>>>> $ shell wrapper:install --start-type DEMAND_START
>>>>> [org.apache.karaf.shell.impl.console.ConsoleSessionImpl] :
>>>>> completionMode property is not defined in etc/org.apache.karaf.shell.cfg
>>>>> file. Using default completion mode.
>>>>> [org.apache.karaf.shell.support.ShellUtil] : Unknown command entered
>>>>> org.apache.felix.gogo.runtime.CommandNotFoundException: Command not
>>>>> found: wrapper:install
>>>>> at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:549)
>>>>> at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:478)
>>>>> at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:367)
>>>>> at org.apache.felix.gogo.runtime.Pipe.doCall(Pipe.java:417)
>>>>> at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:229)
>>>>> at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:59)
>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>>> at java.lang.Thread.run(Thread.java:745)
>>>>> Command not found: wrapper:install
>>>>>
>>>>> Thanks!
>>>>>
>>>>>
>>>>> Gesendet: Freitag, 06. Januar 2017 um 07:32 Uhr
>>>>> Von: "Jean-Baptiste Onofré" 
>>>>> An: user@karaf.apache.org
>>>>> Betreff: Re: Aw: Shell commands are unavailable #1
>>>>> Hi Jens,
>>>>>
>>>>> shell doesn't really start a full Karaf instance, so, it's an expected
>>>>> behavior IMHO.
>>>>>
>>>>> Regards
>>>>> JB
>>>>>
>>>>> On 01/06/2017 06:50 AM, Jens Offenbach wrote:
>>>>>
>>>>>> The problem seems to occur only when using the "shell" script from
>>>>>> the "bin" folder. Using "./start", followed by "./client", the commands
>>>>>> "system:shutdown" and "logout" are working properly.
>>>>>>
>>>>>> Jens
>>>>>>
>>>>>>
>>>>>> Gesendet: Freitag, 06. Januar 2017 um 06:43 Uhr
>>>>>> Von: "Jens Offenbach" 
>>>>>> An: user@karaf.apache.org
>>>>>> Betreff: Shell commands are unavailable
>>>>>> Hi,
>>>>>> I am using Apache Karaf 4.1.0-Snapshot and in the current snapshot
>>>>>> release, there seems to be a problem regarding shell commands. I have
>>>>>> opened the following issue: https://issues.apache.org/jira
>>>>>> /browse/KARAF-4926. But things seem to be much more problematic.
>>>>>> Even the "logout" command is reported as unavailable.
>>>>>>
>>>>>> I am using the last snapshot (apache-karaf-4.1.0-20170105
>>>>&g

Re: Levels of Containerization - focus on Docker and Karaf

2017-01-12 Thread Guillaume Nodet
Fwiw, starting with Karaf 4.x, you can build custom distributions which are
mostly static and map more closely to micro-services / Docker
images.  The "static" images are called this way because they essentially
remove all the OSGi dynamism: no feature service, no deploy folder,
read-only Config Admin, and all bundles installed at startup time from
etc/startup.properties.
This can easily be done by using the karaf maven plugin, configuring
startupFeatures and referencing the static kar, as shown in:
  https://github.com/apache/karaf/blob/master/demos/profiles/static/pom.xml
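[Editor's note: a sketch of the karaf-maven-plugin configuration the linked demo relies on; the application feature name is made up, and the full set of features/kars should be taken from the demo POM itself.]

```xml
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <configuration>
    <!-- Everything listed here is wired into etc/startup.properties,
         so all bundles start at boot with no features service at runtime -->
    <startupFeatures>
      <feature>static</feature>
      <feature>scr</feature>
      <feature>my-app</feature> <!-- hypothetical application feature -->
    </startupFeatures>
  </configuration>
</plugin>
```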


2017-01-11 21:07 GMT+01:00 CodeCola :

> Not a question but a request for comments. With a focus on Java.
>
> Container technology has traditionally been messy with dependencies and no
> easy failsafe way until Docker came along to really pack ALL dependencies
> (including the JVM) together in one ready-to-ship image that was faster,
> more comfortable, and easier to understand than other container and code
> shipping methods out there. The spectrum from (Classical) Java EE
> Containers
> (e.g. Tomcat, Jetty) --> Java Application Servers that are containerized
> (Karaf, Wildfly, etc), Application Delivery Containers (Docker) and
> Virtualization (VMWare, Hyper-V) etc. offers a different level of isolation
> with different goals (abstraction, isolation and delivery).
>
> What are the choices, how should they play together, should they be used in
> conjunction with each other as they offer different kinds of
> Containerization?
>
> <http://karaf.922171.n3.nabble.com/file/n4049162/
> Levels_of_Containerization.png>
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.com/Levels-of-
> Containerization-focus-on-Docker-and-Karaf-tp4049162.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: Levels of Containerization - focus on Docker and Karaf

2017-01-12 Thread Guillaume Nodet
I fully agree.
However, I'm not sure anything is actually missing beyond some
documentation and publicity ...
We could perhaps add a bit of tooling, but I'm not even sure that's
really needed.

Guillaume

2017-01-12 16:46 GMT+01:00 Brad Johnson :

> Guillaume,
>
>
>
> I’d mentioned that in an earlier post as my preferred way to do
> microservices and perhaps a good way of doing a Karaf Boot. I’ve worked
> with the Karaf 4 profiles and they are great.  I’ve also used your CDI OSGi
> support.  If we could use the Karaf 4 profiles with the CDI implementation
> with OSGi services and the Camel Java DSL as a standard stack, it would
> permit focused development and standardized bundle configurations.
>
>
>
> When I created a zip with Karaf, CXF and Camel the footprint was 30MB.
>
>
>
> While having Karaf Boot support DS, Blueprint, CDI, etc. is possible, I’m
> not sure that’s the healthiest move to encourage adoption.  We need less
> fragmentation in the OSGi world and not more.  Obviously even if Karaf Boot
> adopts one as the recommended standard it doesn’t mean that the others
> can’t be used.  When reading through Camel documentation on-line, for
> example, the confusion such fragmentation brings is obvious.  One becomes
> adept at converting Java DSL to blueprint or from blueprint to the Java DSL
> in one’s mind.
>
>
>
> The static profiles work great and will let us create a number of
> standardized appliances for a wide variety of topology concerns and not
> just for microservices.  A “switchboard” appliance, for example, might be
> used for orchestrating microservices and managing the APIs.  A “gateway”
> appliance might have standard JAAS, web service configuration and a routing
> mechanism for calling microservices.  An “AB” appliance could be used for
> 80/20 testing.  And so on.  Take the idea of enterprise integration
> patterns and bring it up to enterprise integration patterns using Karaf
> appliances.
>
>
>
> Many appliances might be “sealed”.  An appliance for AB testing, for
> example, would have configuration for two addresses in the configuration
> file and a percentage of traffic going to each.  No need to actually
> program or re-program the internals any more than we’d usually re-program a
> Camel component.  But the source would be there if one wanted to create a
> new component or modify how an existing one functioned.
>
>
>
> I’d vote for using CDI along with the Camel Java DSL for a couple of
> reasons.  The first would be the standardization and portability of both
> code and skills.  Using CDI would mean that any Glassfish, JBoss, etc.
> developer would feel comfortable using the code.  Using Java Camel DSL
> would be for the same reason.  It would also give a programmer a sense that
> if they give Karaf Boot with the static profiles a shot, they aren’t locked
> in but can easily move to a different stack if necessary.  In a sense this
> is the same reason that Google chose Java language to run on the DVM. It
> tapped into a large existing skillbase instead of trying to get the world
> to adopt and learn a new language.  CDI with OSGi extensions also allows
> developers to use one paradigm for everything from lashing up internal
> dependency injections to working with OSGi services. I believe when you put
> that CDI extension out there you used blueprint style proxies under the
> cover.  As a developer using the CDI OSGi extension it was transparent to
> me.  If you later decided to rework that as a DS service, it would remain
> transparent and very much in the whole spirit of OSGi and its mechanisms
> for allowing refactoring and even rewriting without breaking the world.  It
> also makes unit testing a snap.  Any of us who have wrestled with Camel
> Blueprint Test Support can appreciate that.
>
>
>
> This would also permit for standardization of documentation and of Karaf
> Boot appliance project structures and Maven plug-in use.  A bit of
> convention over configuration.  Projects would have a standard
> configuration.cfg file that gets deployed via features to the pid.cfg. A
> standard features  file in the filtered folder. Those already exist, of
> course, but it isn’t as standardized as it could be.
>
>
>
> Personally I think this sort of goal with CDI, Karaf 4 and its profiles,
> and Camel Java DSL should be accelerated since Spring Boot is already out
> there.  Waiting for another couple of years to release this as a standard
> might be too late.
>
>
>
> The pieces are already there so it isn’t like we’d have to start from
> scratch. This would also play well with larger container concerns like
> Docker and Kubernetes.
>
>
>
> Brad
>
>
>
> *From:* Guillaume Nodet [ma

Re: karaf boot

2017-01-16 Thread Guillaume Nodet
2017-01-16 11:36 GMT+01:00 Christian Schneider :

> I generally like the idea of having one standard way to do dependency
> injection in OSGi. Unfortunately until now we do not have a single
> framework that most people are happy with.
>
> I pushed a lot to make blueprint easier by using the CDI and JEE
> annotations and create blueprint from it using the aries blueprint maven
> plugin. This allows a CDI style development and works very well already.
> Recently Dominik extended my approach a lot and covered much of the CDI
> functionality. Currently this might be the best approach when your
> developers are experienced in JEE. Unfortunately blueprint has some bad
> behaviours like the blocking proxies when a mandatory service goes away.
> Blueprint is also quite complex internally and there is no standardized
> API for extension namespaces.
>
> CDI would be great but it is less well supported on OSGi than blueprint
> and the current implementations also have the same bad proxy behaviour. So
> while I would like to see a really good CDI implementation on OSGi with
> dynamic behaviour like DS we are not there yet.
>

No, the work I've done on CDI is free from those drawbacks.  It has the
same semantics as DS, so anything you can do in DS, you can do in CDI.  You
can even do more than DS because you can wire services "internally", i.e.
you don't have to expose your services to the OSGi registry to wire them
together.


>
> DS is a little limited with its lack of extensibility but it works by far
> best of all frameworks in OSGi. The way it creates and destroys components
> when mandatory references come and go makes it so easy to implement code
> that works well in the dynamic OSGi environment. It also nicely supports
> configs even when using the config factories where you can have one
> instance of your component per config instance.
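One concrete way to see the "one instance per config instance" behaviour (a
sketch; the factory PID and properties below are hypothetical, not from any
real project): in Karaf, each file named `<factoryPid>-<instance>.cfg` dropped
into etc/ creates one configuration, and a DS component bound to that factory
PID gets one component instance per file.

```
# etc/org.example.server-east.cfg  (factory PID "org.example.server", hypothetical)
host = east.example.org

# etc/org.example.server-west.cfg  (a second config => a second component instance)
host = west.example.org
```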
>
> So for the moment I would rather use DS as a default dependency injection
> for karaf boot. It is also the smallest footprint. When CDI is ready we
> could switch to CDI.
>

I think it's ready.  The spec that the OSGi Alliance is working on is crap
imho, but I've raised my hand several times already, so I won't try to
bargain about all the limitations and creepy proxy things they want to do.
That said, the spec has 2 parts, the first one is about CDI applications in
OSGi, and that one is good.  The second one is a CDI extension for OSGi
service registry interaction, and that's the one that is bad, bad it's
pluggable, so we can easily use the Pax-CDI one and that will cause no
problems.

I think this CDI stuff has all the benefits of CDI + DS without the
drawbacks of blueprint, so I'd rather have us focusing on it.


>
>
> Christian
>
>
>
> On 11.01.2017 22:03, Brad Johnson wrote:
>
> I definitely like the direction of the Karaf Boot with the CDI, blueprint,
> DS, etc. starters.  Now if we could integrate that with the Karaf profiles
> and have standardized Karaf Boot containers to configure like tinkertoys
> we’d be there.  I may work on some of that. I believe the synergy between
> Karaf Boot and the profiles could be outstanding. It would make any
> development easier by using all the standard OSGi libraries and make
> microservices a snap.
>
>
>
> If we have a workable CDI version of service/reference annotation then I’m
> not sure why I’d use DS. It may be that the external configuration of DS is
> more fleshed out but CDI has so much by way of easy injection that it makes
> coding and especially testing a lot easier.  I guess the CDI OSGi services
> could leverage much of DS.  Dunno.
>
>
>
> In any case, I think that’s on the right track.
>
>
>
> *From:* Christian Schneider [mailto:cschneider...@gmail.com
> ] *On Behalf Of *Christian Schneider
> *Sent:* Wednesday, January 11, 2017 8:52 AM
> *To:* user@karaf.apache.org
> *Subject:* Re: karaf boot
>
>
>
> Sounds like you have a good case to validate karaf boot on.
>
> Can you explain how you create your deployments now and what you are
> missing in current karaf? Until now we only discussed internally about the
> scope and requirements of karaf boot. It would be very valuable to get some
> input from a real world case.
>
> Christian
>
> On 11.01.2017 13:41, Nick Baker wrote:
>
> We'd be interested in this as well. Beginning to move toward Microservices
> deployments + Remote Services for interop. I'll have a look at your branch
> JB!
>
>
>
> We've added support in our Karaf main for multiple instances from the same
> install on disk. Cache directories segmented, port conflicts handled. This
> of course isn't an issue in container-based cloud deployments (Docker).
> Still, may be of 

Re: karaf boot

2017-01-16 Thread Guillaume Nodet
2017-01-16 13:25 GMT+01:00 Guillaume Nodet :

>
>
> 2017-01-16 11:36 GMT+01:00 Christian Schneider :
>
>> I generally like the idea of having one standard way to do dependency
>> injection in OSGi. Unfortunately until now we do not have a single
>> framework that most people are happy with.
>>
>> I pushed a lot to make blueprint easier by using the CDI and JEE
>> annotations and create blueprint from it using the aries blueprint maven
>> plugin. This allows a CDI style development and works very well already.
>> Recently Dominik extended my approach a lot and covered much of the CDI
>> functionality. Currently this might be the best approach when your
>> developers are experienced in JEE. Unfortunately blueprint has some bad
>> behaviours like the blocking proxies when a mandatory service goes away.
>> Blueprint is also quite complex internally and there is no standardized
>> API for extension namespaces.
>>
>> CDI would be great but it is less well supported on OSGi than
>> blueprint and the current implementations also have the same bad proxy
>> behaviour. So while I would like to see a really good CDI implementation on
>> OSGi with dynamic behaviour like DS we are not there yet.
>>
>
> No, the work I've done on CDI is free from those drawbacks.  It has the
> same semantics as DS, so anything you can do in DS, you can do in CDI.  You
> can even do more than DS because you can wire services "internally", i.e.
> you don't have to expose your services to the OSGi registry to wire them
> together.
>
>
>>
>> DS is a little limited with its lack of extensibility but it works by far
>> best of all frameworks in OSGi. The way it creates and destroys components
>> when mandatory references come and go makes it so easy to implement code
>> that works well in the dynamic OSGi environment. It also nicely supports
>> configs even when using the config factories where you can have one
>> instance of your component per config instance.
>>
>> So for the moment I would rather use DS as a default dependency injection
>> for karaf boot. It is also the smallest footprint. When CDI is ready we
>> could switch to CDI.
>>
>
> I think it's ready.  The spec that the OSGi alliance is working on, is
> crap imho, but I've raised my hand several times already, so I won't try to
> bargain about all the limitations and creepy proxy things they want to do.
> That said, the spec has 2 parts, the first one is about CDI applications in
> OSGi, and that one is good.  The second one is a CDI extension for OSGi
> service registry interaction, and that's the one that is bad, bad it's
> pluggable, so we can easily use the Pax-CDI one and that will cause no
> problems.
>

Read "but it's pluggable".


>
> I think this CDI stuff has all the benefits of CDI + DS without the
> drawbacks of blueprint, so I'd rather have us focusing on it.
>
>
>>
>>
>> Christian
>>
>>
>>
>> On 11.01.2017 22:03, Brad Johnson wrote:
>>
>> I definitely like the direction of the Karaf Boot with the CDI,
>> blueprint, DS, etc. starters.  Now if we could integrate that with the
>> Karaf profiles and have standardized Karaf Boot containers to configure
>> like tinkertoys we’d be there.  I may work on some of that. I believe the
>> synergy between Karaf Boot and the profiles could be outstanding. It would
>> make any development easier by using all the standard OSGi libraries and
>> make microservices a snap.
>>
>>
>>
>> If we have a workable CDI version of service/reference annotation then
>> I’m not sure why I’d use DS. It may be that the external configuration of
>> DS is more fleshed out but CDI has so much by way of easy injection that it
>> makes coding and especially testing a lot easier.  I guess the CDI OSGi
>> services could leverage much of DS.  Dunno.
>>
>>
>>
>> In any case, I think that’s on the right track.
>>
>>
>>
>> *From:* Christian Schneider [mailto:cschneider...@gmail.com
>> ] *On Behalf Of *Christian Schneider
>> *Sent:* Wednesday, January 11, 2017 8:52 AM
>> *To:* user@karaf.apache.org
>> *Subject:* Re: karaf boot
>>
>>
>>
>> Sounds like you have a good case to validate karaf boot on.
>>
>> Can you explain how you create your deployments now and what you are
>> missing in current karaf? Until now we only discussed internally about the
>> scope and requirements of karaf boot. It would be very valuable to get some
>> input from a real world case.
>>
>> C

Re: karaf boot

2017-01-16 Thread Guillaume Nodet
more fleshed out but CDI has so much by way of easy injection that it makes
> coding and especially testing a lot easier.  I guess the CDI OSGi services
> could leverage much of DS.  Dunno.
>
>
>
> In any case, I think that’s on the right track.
>
>
>
> *From:* Christian Schneider [mailto:cschneider...@gmail.com
> ] *On Behalf Of *Christian Schneider
> *Sent:* Wednesday, January 11, 2017 8:52 AM
> *To:* user@karaf.apache.org
> *Subject:* Re: karaf boot
>
>
>
> Sounds like you have a good case to validate karaf boot on.
>
> Can you explain how you create your deployments now and what you are
> missing in current karaf? Until now we only discussed internally about the
> scope and requirements of karaf boot. It would be very valuable to get some
> input from a real world case.
>
> Christian
>
> On 11.01.2017 13:41, Nick Baker wrote:
>
> We'd be interested in this as well. Beginning to move toward Microservices
> deployments + Remote Services for interop. I'll have a look at your branch
> JB!
>
>
>
> We've added support in our Karaf main for multiple instances from the same
> install on disk. Cache directories segmented, port conflicts handled. This
> of course isn't an issue in container-based cloud deployments (Docker).
> Still, may be of use.
>
>
>
> -Nick Baker
>
>
>
>
> --
>
> Christian Schneider
>
> http://www.liquid-reality.de
>
>
>
> Open Source Architect
>
> http://www.talend.com
>
>
>
>
>
> --
>
> Christian Schneider
>
> http://www.liquid-reality.de
>
>
>
> Open Source Architect
>
> http://www.talend.com
>
>


-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: watch bundles by default

2017-01-18 Thread Guillaume Nodet
Have you tried the following ?
 > bundle:watch *
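If you want that to happen automatically at startup, one untested possibility
(a sketch, not a documented configuration option) is to put the command in
etc/shell.init.script, which Karaf executes when a console session opens:

```
bundle:watch *
```

Note this only takes effect once a shell session has actually started.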


2017-01-18 10:36 GMT+01:00 Toni Menzel :

> Hi,
>
> is there a simple configuration to do "bundle:watch " by
> default? Either all or (better) using a filter expression on BSN?
>
> *Toni Menzel*
>
>
> *www.rebaze.de <http://www.rebaze.de/> | www.rebaze.com
> <http://www.rebaze.com/> | @rebazeio <https://twitter.com/rebazeio>*
>



-- 

Guillaume Nodet

Red Hat, Open Source Integration

Email: gno...@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/


Re: Karaf: feature satisfy requirements

2017-02-05 Thread Guillaume Nodet
You can modify the etc/config.properties file, in particular the
  org.osgi.framework.system.capabilities
configuration.  If some services that are provided by default by the
framework are missing from the declared capabilities,
you may want to raise a JIRA issue and provide a patch / pull request.

2017-02-05 12:05 GMT+01:00 Markus Rathgeb :

> Ah, okay, I assume it is caused by the Equinox bundle missing a
> Provide-Capability header declaring that it provides that service.
> Could this information be added by some configuration file without
> modifying the Equinox bundle itself?
>
> 2017-02-05 11:49 GMT+01:00 Markus Rathgeb :
> > Hello,
> >
> > I thought I understand how the dependency flag is working for features
> > and bundles, but at least it seems to be different.
> > Perhaps someone could explain me the following scenario:
> >
> > Feature file:
> > ===
> > <?xml version="1.0" encoding="UTF-8"?>
> > <features xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
> >
> >   <feature name="saxparserfactory" version="1.0-SNAPSHOT">
> >     <requirement>osgi.service;filter:="(objectClass=javax.xml.parsers.SAXParserFactory)";effective:=active</requirement>
> >     <feature dependency="true">jboss-xerces</feature>
> >   </feature>
> >
> >   <feature name="jboss-xerces" version="1.0-SNAPSHOT">
> >     <details>OSGi service for 'javax.xml.parsers.SAXParserFactory'</details>
> >     <bundle start-level="30">https://repository.jboss.org/nexus/content/repositories/releases/org/jboss/osgi/xerces/jbosgi-xerces/3.1.0.Final/jbosgi-xerces-3.1.0.Final.jar</bundle>
> >     <capability>osgi.service;objectClass=javax.xml.parsers.SAXParserFactory</capability>
> >     <bundle start-level="30">mvn:org.osgi/org.osgi.util.xml/1.0.1</bundle>
> >     <bundle start-level="8">mvn:org.ops4j.pax.logging/pax-logging-api/1.9.1</bundle>
> >   </feature>
> > </features>
> > ===
> >
> > I would expect if I install the feature "saxparserfactory" the feature
> > jboss-xerces is installed only if the requirements are not already
> > fulfilled.
> > The only requirement the feature drops in should be to ensure that
> > there is a SAXParserFactory service available.
> > Should this be possible?
> >
> >
> > Test
> > ===
> >
> > I used the current Karaf 4.1.0 form the second voting round.
> >
> > Start a clean instance:
> > $ bin/karaf clean
> >
> > As expected Apache Felix is the used OSGi framework
> >
> > karaf@root()> bundle:list -t 0 -s 0 | grep Active
> >  0 │ Active │   0 │ 5.6.1   │ org.apache.felix.framework
> >
> > I added the feature repository file.
> >
> > karaf@root()> feature:repo-add
> > file:///home/maggu2810/tmp/saxparserfactory-feature.xml
> > Adding feature url file:///home/maggu2810/tmp/
> saxparserfactory-feature.xml
> >
> > After that let's look if there is already a SAXParserFactory present:
> >
> > karaf@root()> service:list javax.xml.parsers.SAXParserFactory
> >
> > No output, so as expected, no service found.
> >
> > Check what will be done if the feature is installed:
> >
> > karaf@root()> feature:install -t -v saxparserfactory
> > Adding features: saxparserfactory/[1.0.0.SNAPSHOT,1.0.0.SNAPSHOT]
> > Changes to perform:
> >   Region: root
> > Bundles to install:
> >   https://repository.jboss.org/nexus/content/repositories/
> releases/org/jboss/osgi/xerces/jbosgi-xerces/3.1.0.
> Final/jbosgi-xerces-3.1.0.Final.jar
> >   mvn:org.osgi/org.osgi.util.xml/1.0.1
> >
> > This is as I expect what should be done.
> >
> > Let's exit the Karaf container.
> >
> > karaf@root()> shutdown -f
> >
> >
> >
> > After that I used the Equinox framework
> >
> > $ echo 'karaf.framework=equinox' >> etc/custom.properties
> >
> > and started again a clean instance.
> >
> > $ bin/karaf clean
> >
> > Ensure the feature repository is available:
> >
> > karaf@root()> feature:repo-add
> > file:///home/maggu2810/tmp/saxparserfactory-feature.xml
> > Adding feature url file:///home/maggu2810/tmp/
> saxparserfactory-feature.xml
> >
> > The Equinox OSGi framework bundle already provides a SAXParserFactory
> service.
> >
> > karaf@root()> service:list javax.xml.parsers.SAXParserFactory
> > [javax.xml.parsers.SAXParserFactory]
> > 
> >  service.pid = 0.org.eclipse.osgi.internal.framework.
> XMLParsingServiceFactory
> >  service.vendor = Eclipse.org - Equinox
> >  service.id = 20
> >  service.bundleid = 0
> >  service.scope = bundle
> > Provided by :
> >  OSGi System Bundle (0)
> >
> > Now I would expect that the installation of the saxparserfactory
> > feature will not trigger an installation of the jboss-xerces feature,
> > because all requirements should be already satisfied.
> > But it looks like:
> >
> > karaf@root()> feature:install -t -v saxparserfactory
> > Adding features: saxparserfactory/[1.0.0.SNAPSHOT,1.0.0.SNAPSHOT]
> > Changes to perform:
> >   Region: root
> > Bundles to install:
> >   https://repository.jboss.org/nexus/content/repositories/
> releases/org/jboss/osgi/xerces/jbosgi-xerces/3.1.0.
> Final/jbosgi-xerces-3.1.0.Final.jar
> >   mvn:org.osgi/org.osgi.util.xml/1.0.1
> >
> > What am I doing wrong?
> >
> > Best regards,
> > Markus Rathgeb
>



-- 

Guillaume Nodet


Re: Karaf: feature satisfy requirements

2017-02-06 Thread Guillaume Nodet
The best option is to:
  * fix the system bundle capabilities
  * use a dependency feature as you did

To fix the system bundle capabilities, depending on the framework used, you
can do something along the lines of the following:

org.osgi.framework.system.capabilities=\
   ...\
   ${${framework}-capabilities}

felix-capabilities=

equinox-capabilities=\
  osgi.service;effective:=active;objectClass=javax.xml.parsers.SAXParserFactory

I haven't tested the above, but hopefully you'll get the idea.  It's
similar to what's done for the ${jre-${java.specification.version}}...

2017-02-06 9:16 GMT+01:00 Markus Rathgeb :

> Hi Guillaume,
> thank you for your reply.
>
> In this special case it is about an implementation for
> "javax.xml.parsers.SAXParserFactory" and a service for that.
> The bundle "org.eclipse.equinox.registry" (which I need to satisfy
> third-party dependencies) uses a tracker to find an implementation
> and prints errors to stdout if none is present.
> To prevent those messages on every Karaf start I would like to have
> such a service in the Karaf instance.
>
> The Equinox OSGi framework provides that service itself. Without any
> provide capability in its manifest.
> The Apache Felix OSGi framework doesn't provide such a service.
>
> So, what are the options:
>
> add it to "org.osgi.framework.system.capabilities"
> IMHO this would not be correct, because the service is present if
> Equinox is used and not present if Felix is used.
>
> Use "conditionals" in a feature to install another bundle that
> provides a "javax.xml.parsers.SAXParserFactory" service if Felix is
> used (and so the conditional will skip the installation if Equinox is
> used).
> If my reading is correct, conditionals can only be used for
> installed features, not for the specific OSGi framework
> implementation.
>
> I can install another bundle / feature that provides that service
> implementation for both frameworks.
> This is okay, but if it is not necessary...
>
> So, is there a simple method to leave it up to the user to switch
> between "karaf.framework" Felix or Equinox and handle that in some
> way, so that another implementation is installed only if Equinox is not
> chosen?
>
> Best regards,
> Markus
>
> 2017-02-06 8:45 GMT+01:00 Guillaume Nodet :
> > You can modify the etc/config.properties file, in particular the
> >   org.osgi.framework.system.capabilities
> > configuration.  If some service are provided by default by the framework
> and
> > are missing,
> > you may want to raise a JIRA issue and provide a patch / pull request.
> >
> > 2017-02-05 12:05 GMT+01:00 Markus Rathgeb :
> >>
> >> Ah, okay, I assume it is caused by the missing
> >> Provide-Capability of the Equinox bundle that it provides that service.
> >> Could this information be added by some configuration file without
> >> modify the Equinox bundle itself?
> >>
> >> 2017-02-05 11:49 GMT+01:00 Markus Rathgeb :
> >> > Hello,
> >> >
> >> > I thought I understand how the dependency flag is working for features
> >> > and bundles, but at least it seems to be different.
> >> > Perhaps someone could explain me the following scenario:
> >> >
> >> > Feature file:
> >> > ===
> >> > 
> >> >  >> > xmlns="http://karaf.apache.org/xmlns/features/v1.4.0";>
> >> >
> >> >   
> >> >
> >> > osgi.service;filter:="(objectClass=javax.
> xml.parsers.SAXParserFactory)";effective:=active
> >> > jboss-xerces
> >> >   
> >> >
> >> >>> > version="1.0-SNAPSHOT">
> >> > OSGi service for
> >> > 'javax.xml.parsers.SAXParserFactory'
> >> >
> >> > 
> >> > 
> >> >  >> > start-level="30">https://repository.jboss.org/nexus/
> content/repositories/releases/org/jboss/osgi/xerces/jbosgi-
> xerces/3.1.0.Final/jbosgi-xerces-3.1.0.Final.jar
> >> >
> >> > osgi.service;objectClass=javax.xml.parsers.
> SAXParserFactory
> >> >
> >> >  >> > start-level="30">mvn:org.osgi/org.osgi.util.xml/1.0.1
> >> >  >> > start-level="8">mvn:org.ops4j.pax.logging/pax-logging-api/1.
> 9.1
> >> >   
> >> >
> >> > 
> >> > ===
> >> >
> >> > I would expect if I install the feature "saxparserfactory" the feature
> 

Re: Karaf: feature satisfy requirements

2017-02-06 Thread Guillaume Nodet
2017-02-06 9:49 GMT+01:00 Markus Rathgeb :

> That looks good, thanks a lot.
>
> Do you think this is something others are interested in?
> Should I create a Jira + PR if it is working?
>

Sure, but it would be interesting to do a complete pass on the services
provided by each framework for completeness.


>
> 2017-02-06 9:21 GMT+01:00 Guillaume Nodet :
> > The best option is to:
> >   * fix the system bundle capabilities
> >   * use a dependency feature as you did
> >
> > To fix the system bundle capabilities, depending on the framework used,
> you
> > can do something along the following:
> >
> > org.osgi.framework.system.capabilities=\
> >...\
> >${${framework}-capabilities}
> >
> > felix-capabilities=
> >
> > equinox-capabilities=\
> >
> > osgi.service;effective:=active;objectClass=javax.xml.
> parsers.SAXParserFactory
> >
> > I haven't tested the above, but hopefully you'll get the idea.  It's
> similar
> > to what's done for the ${jre-${java.specification.version}}...
> >
> > 2017-02-06 9:16 GMT+01:00 Markus Rathgeb :
> >>
> >> Hi Guillaume,
> >> thank you for your reply.
> >>
> >> In this special case it is about an implementation for
> >> "javax.xml.parsers.SAXParserFactory" and a service for that.
> >> The bundle "org.eclipse.equinox.registry" (I need to use to satisfy
> >> third party dependencies) is using a tracker to find an implementation
> >> and prints errors to stdout if there is no one.
> >> To prevent that messages on every Karaf start I would like to have
> >> such a service in the Karaf instance.
> >>
> >> The Equinox OSGi framework provides that service itself. Without any
> >> provide capability in its manifest.
> >> The Apache Felix OSGi framework doesn't provide such a service.
> >>
> >> So, what are the options:
> >>
> >> add it to "org.osgi.framework.system.capabilities"
> >> IMHO this will be not correct, because the service is present if
> >> Equinox is used and not present if Felix is used.
> >>
> >> Use "conditionals" in a feature to install another bundle that
> >> provides a "javax.xml.parsers.SAXParserFactory" service if Felix is
> >> used (and so the conditional will skip the installation if Equinox is
> >> used).
> >> If my reading has been correct, conditionals could be used for
> >> installed features and not for the special OSGi framework
> >> implementation.
> >>
> >> I can install another bundle / feature that provides that service
> >> implementation for both frameworks.
> >> This is okay, but if it is not necessary...
> >>
> >> So, is there a simple method to leave it up to the user to switch
> >> between "karaf.framework" Felix or Equinox and handle that in some
> >> way, so that another implementation is installed only if Equinox is not
> >> chosen?
> >>
> >> Best regards,
> >> Markus
> >>
> >> 2017-02-06 8:45 GMT+01:00 Guillaume Nodet :
> >> > You can modify the etc/config.properties file, in particular the
> >> >   org.osgi.framework.system.capabilities
> >> > configuration.  If some service are provided by default by the
> framework
> >> > and
> >> > are missing,
> >> > you may want to raise a JIRA issue and provide a patch / pull request.
> >> >
> >> > 2017-02-05 12:05 GMT+01:00 Markus Rathgeb :
> >> >>
> >> >> Ah, okay, I assume it is caused by the missing
> >> >> Provide-Capability of the Equinox bundle that it provides that
> service.
> >> >> Could this information be added by some configuration file without
> >> >> modify the Equinox bundle itself?
> >> >>
> >> >> 2017-02-05 11:49 GMT+01:00 Markus Rathgeb :
> >> >> > Hello,
> >> >> >
> >> >> > I thought I understand how the dependency flag is working for
> >> >> > features
> >> >> > and bundles, but at least it seems to be different.
> >> >> > Perhaps someone could explain me the following scenario:
> >> >> >
> >> >> > Feature file:
> >> >> > ===
> >> >> > 
> >> >> >  >> >> > xmlns="http://karaf.apache.org/xmlns/features/v1.4.0";>
> >> >>

Re: shell:date wrong time

2017-03-02 Thread Guillaume Nodet
Could you raise a JIRA issue for that please ?

2017-03-02 10:45 GMT+01:00 sion-zenit :

> Hi! In the Karaf client I execute the command shell:date and it returns a
> time that is off by an hour.
>
> Client output
> karaf@trun()> version
> 4.0.5
> karaf@trun()> shell:date
> Tue Feb 28 12:44:08 MSK 2017
>
> System output
> $ date
> Втр Фев 28 11:44:38 MSK 2017
> OS Linux MYSRV 2.6.32-573.22.1.el6.x86_64
> How can I solve this problem?
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.
> com/shell-date-wrong-time-tp4049693p4049710.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: shell:date wrong time

2017-03-02 Thread Guillaume Nodet
The problem is that we use
   new Date()
which does not handle time zones well. Instead, we should use
  new GregorianCalendar()
or
  java.time.LocalDateTime.now()

That's for the timezone.
For the language, I guess it comes from the formatter used, we don't use
the locale afaik:

https://github.com/apache/felix/blob/trunk/gogo/jline/src/main/java/org/apache/felix/gogo/jline/Posix.java#L264
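As a hedged sketch of the java.time approach (illustrative only, not the
actual Posix.java patch; the class and method names are made up), pinning both
the zone and the locale addresses the two problems above:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class DateDemo {

    // Render an instant in an explicit zone with locale-aware names,
    // mimicking the default `date` output format.
    static String format(Instant instant, ZoneId zone, Locale locale) {
        DateTimeFormatter fmt =
                DateTimeFormatter.ofPattern("EEE MMM dd HH:mm:ss zzz yyyy", locale);
        return ZonedDateTime.ofInstant(instant, zone).format(fmt);
    }

    public static void main(String[] args) {
        // The instant from the report: 11:44 in Moscow (UTC+3) is 08:44 UTC
        Instant t = Instant.parse("2017-02-28T08:44:08Z");
        System.out.println(format(t, ZoneId.of("Europe/Moscow"), Locale.ENGLISH));
    }
}
```

With new Date().toString() the zone is always the JVM default and the names
are not localized, which matches the mismatch reported here.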

2017-03-02 14:48 GMT+01:00 Jean-Baptiste Onofré :

> By the way, it works fine on my box.
>
> Gonna investigate.
>
> Regards
> JB
>
>
> On 03/02/2017 02:40 PM, sion-zenit wrote:
>
>> Guillaume Nodet-2, thanks for the answer.
>> https://issues.apache.org/jira/browse/KARAF-5005
>>
>>
>>
>> --
>> View this message in context: http://karaf.922171.n3.nabble.
>> com/shell-date-wrong-time-tp4049693p4049714.html
>> Sent from the Karaf - User mailing list archive at Nabble.com.
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



-- 

Guillaume Nodet


Re: shell:date wrong time

2017-03-02 Thread Guillaume Nodet
Could you run the same with Karaf 4.1.0 please ?

2017-03-02 10:45 GMT+01:00 sion-zenit :

> Hi! In the Karaf client I execute the command shell:date and it returns a
> time that is off by an hour.
>
> Client output
> karaf@trun()> version
> 4.0.5
> karaf@trun()> shell:date
> Tue Feb 28 12:44:08 MSK 2017
>
> System output
> $ date
> Втр Фев 28 11:44:38 MSK 2017
> OS Linux MYSRV 2.6.32-573.22.1.el6.x86_64
> How can I solve this problem?
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.
> com/shell-date-wrong-time-tp4049693p4049710.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: Gemini Blueprint on Karaf 4.0.8

2017-03-13 Thread Guillaume Nodet
Can you run the installation using:
  feature:install --simulate --verbose gemini-blueprint
This should give more information about the exact reason why the bundles
are refreshed.

2017-03-12 21:12 GMT+01:00 Ygor Castor :

>  Since Spring DM is long dead and is quite a buggy way to wire Spring into
> an OSGi context, I've decided to give Gemini a chance, since it's actively
> developed. First of all, I've updated the feature to:
>
> http://pastebin.com/4sAzyxGg
>
> And it seems to work OK, but with one problem: when I install any feature
> the whole container goes awry, refreshing even core bundles. Here is the
> log:
>
> http://pastebin.com/t8h2FTpt
>
> I've had the same problem before with fragment bundles: I've created a
> Liquibase extension fragment, and every time that fragment got refreshed the
> whole container became a mess.
>
> Right now, to "solve" the problem I'm using feature:install/uninstall -r,
> which disables the feature bundles' auto-refresh, and refreshing manually
> using bundle:refresh. Doing that there is no problem, and I'm able to use
> Gemini and the fragments.
>
> I'm not sure this is a normal behavior, can anyone confirm?
>



-- 

Guillaume Nodet


Re: Camel on Karaf looking to resolve http://camel.apache.org/schema/spring/v${camel.schema.version}

2017-03-13 Thread Guillaume Nodet
IIRC, camel-blueprint translates the XML into the camel-spring namespace to
be able to leverage JAXB, but clearly the property placeholder should have
been replaced by the actual Camel version.
Can you explain exactly which bundles / features you installed  (with
versions) and the blueprint xml you use ?

2017-03-12 14:14 GMT+01:00 Niels Bertram :

> Hi there,
>
> I am trying to implement a very basic CXF / Camel REST service, and
> despite best efforts I am getting the stack trace below when installing my
> bundle.
>
> The disturbing part is that I did not ask for Spring; I want a plain
> Blueprint instance.
>
> Anyone has seen this before?
>
>
> 2017-03-12 23:00:39,527 | WARN  | pool-32-thread-2 |
> NamespaceHandlerRegistryImpl | 12 - org.apache.aries.blueprint.core -
> 1.7.1 | Error registering NamespaceHandler
> java.lang.IllegalArgumentException: Illegal character in path at index
> 40: http://camel.apache.org/schema/spring/v${camel.schema.version}
> at java.net.URI.create(URI.java:852) [?:?]
> at org.apache.aries.blueprint.namespace.
> NamespaceHandlerRegistryImpl.getNamespaces(NamespaceHandlerRegistryImpl.java:203)
> [12:org.apache.aries.blueprint.core:1.7.1]
> at org.apache.aries.blueprint.namespace.
> NamespaceHandlerRegistryImpl.registerHandler(NamespaceHandlerRegistryImpl.java:157)
> [12:org.apache.aries.blueprint.core:1.7.1]
> at org.apache.aries.blueprint.namespace.
> NamespaceHandlerRegistryImpl.addingService(NamespaceHandlerRegistryImpl.java:121)
> [12:org.apache.aries.blueprint.core:1.7.1]
> at 
> org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:941)
> [?:?]
> at 
> org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:870)
> [?:?]
> at 
> org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:256)
> [?:?]
> at 
> org.osgi.util.tracker.AbstractTracked.track(AbstractTracked.java:229)
> [?:?]
> at org.osgi.util.tracker.ServiceTracker$Tracked.
> serviceChanged(ServiceTracker.java:901) [?:?]
> at org.apache.felix.framework.EventDispatcher.
> invokeServiceListenerCallback(EventDispatcher.java:990) [?:?]
> at org.apache.felix.framework.EventDispatcher.
> fireEventImmediately(EventDispatcher.java:838) [?:?]
> at 
> org.apache.felix.framework.EventDispatcher.fireServiceEvent(EventDispatcher.java:545)
> [?:?]
> at org.apache.felix.framework.Felix.fireServiceEvent(Felix.java:4557)
> [?:?]
> at org.apache.felix.framework.Felix.registerService(Felix.java:3549)
> [?:?]
> at 
> org.apache.felix.framework.BundleContextImpl.registerService(BundleContextImpl.java:348)
> [?:?]
> at 
> org.apache.felix.framework.BundleContextImpl.registerService(BundleContextImpl.java:355)
> [?:?]
> at 
> org.apache.aries.blueprint.spring.SpringExtension.start(SpringExtension.java:78)
> [202:org.apache.aries.blueprint.spring:0.2.0]
> at 
> org.apache.felix.utils.extender.AbstractExtender$1.run(AbstractExtender.java:265)
> [202:org.apache.aries.blueprint.spring:0.2.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [?:?]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
> at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:?]
> at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [?:?]
> at java.lang.Thread.run(Thread.java:745) [?:?]
> Caused by: java.net.URISyntaxException: Illegal character in path at index
> 40: http://camel.apache.org/schema/spring/v${camel.schema.version}
> at java.net.URI$Parser.fail(URI.java:2848) ~[?:?]
> at java.net.URI$Parser.checkChars(URI.java:3021) ~[?:?]
> at java.net.URI$Parser.parseHierarchical(URI.java:3105) ~[?:?]
> at java.net.URI$Parser.parse(URI.java:3053) ~[?:?]
> at java.net.URI.(URI.java:588) ~[?:?]
> at java.net.URI.create(URI.java:850) ~[?:?]
> ... 24 more
>



-- 

Guillaume Nodet


Fwd: Gemini Blueprint on Karaf 4.0.8

2017-03-13 Thread Guillaume Nodet
4.0.8 which is being refreshed)
>
>
>
> 
> Can you run the installation using:
>   feature:install --simulate --verbose gemini-blueprint
> This should give more information about the exact reason why the bundles
> are refreshed.
>
> 2017-03-12 21:12 GMT+01:00 Ygor Castor :
>
> >  Since Spring DM is long dead and it's quite a buggy way to wire Spring
> > in an OSGi context, I've decided to give Gemini a chance, since it's actively
> > developed, first of all i've updated the feature to:
> >
> > http://pastebin.com/4sAzyxGg
> >
> > And it seems to work ok, but with one problem, when i install any feature
> > the whole container goes awry, refreshing even core bundles, here is the
> > log:
> >
> > http://pastebin.com/t8h2FTpt
> >
> > I've had the same problem before with Fragment bundles. I've created a
> > Liquibase Extension fragment, and every time that fragment got refreshed
> > the whole container became a mess.
> >
> > Right now, to "solve" the problem I'm using feature:install/uninstall -r,
> > which disables feature bundles auto-refresh, and refreshing manually
> > using bundle:refresh. Doing that there is no problem, and I'm able to use
> > gemini and the fragments.
> >
> > I'm not sure this is a normal behavior, can anyone confirm?
> >
>
>
>
> --
> 
> Guillaume Nodet
>
> 
> Quoted from:
> http://karaf.922171.n3.nabble.com/Gemini-Blueprint-on-Karaf-
> 4-0-8-tp4049831p4049833.html
>
>
> _
> Sent from http://karaf.922171.n3.nabble.com
>
>


-- 

Guillaume Nodet






Re: Odd behaviour with bin/client

2017-03-29 Thread Guillaume Nodet
This is caused by https://issues.apache.org/jira/browse/SSHD-732
I've also added a workaround in Karaf which should be in 4.1.1:

https://github.com/apache/karaf/blob/master/client/src/main/java/org/apache/karaf/client/Main.java#L154

2017-03-29 16:35 GMT+02:00 Anders :

> Hello!
>
> I am trying Karaf 4.1.0. All things below are done on an unmodified
> directory structure after extracting apache-karaf-4.1.0.tar.gz. I am
> starting the
> software on a Fedora 25 64bit Oracle JDK 8 (tried as well OpenJDK 8)
> running
> in VirtualBox, using the command bin/start.
>
> Karaf 4.1.0 starts fine. However when I try to connect to the shell using
> bin/client it continuously asks me for the password:
>
> $ bin/client
> client: JAVA_HOME not set; results may vary
> Logging in as karaf
> Password:
> Password:
> No more authentication methods available
>
> I tried to set JAVA_HOME variable but that just removed the warning. I can
> login with ssh:
>
> $ ssh -p 8101 karaf@localhost
> Password authentication
> Password:
>         __ __                  ____
>        / //_/____ __________ _/ __/
>       / ,<  / __ `/ ___/ __ `/ /_
>      / /| |/ /_/ / /  / /_/ / __/
>     /_/ |_|\__,_/_/   \__,_/_/
>
>   Apache Karaf (4.1.0)
>
> Hit '<tab>' for a list of available commands
> and '[cmd] --help' for help on a specific command.
> Hit 'system:shutdown' to shutdown Karaf.
> Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.
>
> karaf@root()>
>
> However ssh is unusable as that somehow adds way too many tabs and spaces
> (like pasting indented code to vim with wrong settings)...it is unreadable.
>
> The next odd thing is:
> I tried Karaf 4.0.8 and that works! Not only does it works, it can connect
> to Karaf 4.1.0 no problems. It does not ask for a password:
>
> $ bin/client
> client: JAVA_HOME not set; results may vary
> Logging in as karaf
>         __ __                  ____
>        / //_/____ __________ _/ __/
>       / ,<  / __ `/ ___/ __ `/ /_
>      / /| |/ /_/ / /  / /_/ / __/
>     /_/ |_|\__,_/_/   \__,_/_/
>
>   Apache Karaf (4.1.0)
>
> Hit '<tab>' for a list of available commands
> and '[cmd] --help' for help on a specific command.
> Hit 'system:shutdown' to shutdown Karaf.
> Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.
>
> karaf@root()>
>
> The output is correctly formatted here and is usable (except I have to
> press
> Ctrl-j to run commands, as it is running in a hard core Emacs setting,
> Enter
> is not responding).
>
> Lastly I tried bin/shell. From there I could access ssh:ssh. I got the
> output in this pastebin: https://pastebin.com/2wg0rv3B. I did two
> attempts,
> one to default port 22 (not asking for password) and 8101 (asked twice for
> password). You can see the second attempt about halfway down. I pressed
> ENTER a few times on the prompt so it is easier to find.
>
> I watched data/log/karaf.log. It does add this line (before typing
> password,
> but after I run bin/client):
> 2017-03-29T16:31:37,069 | WARN  | sshd-SshServer[279691af]-nio2-thread-1 |
> VersionProperties$LazyHolder | 48 - org.apache.sshd.core - 1.2.0 |
> Failed (FileNotFoundException) to load version properties: Resource does
> not
> exists
>
> That message appears only once, NOT once per attempt. Once per restart. And
> as said, before I type the password.
>
> Do anyone have any clue what might be the problem? I would be really happy
> to hear what it might be!
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.
> com/Odd-behaviour-with-bin-client-tp4049962.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: KARAF-4829 Make sure configFile in features makes config available early

2017-03-30 Thread Guillaume Nodet
By default, the plugin will write all configurations from the startup and boot
stages.  For boot features, this can be further refined using the
pidsToExtract parameter on the assembly mojo.  For example, the default
karaf assembly does not extract configs for acls.
In all cases, whether the configuration is extracted by the FeaturesService
at boot stage or not, the bundles should be started after the config is
installed.

So the default should be to extract all the cfg files.
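As a sketch, the pidsToExtract refinement could look like this in the assembly project's pom.xml (the plugin coordinates are real; the pid patterns are illustrative, mirroring the acl exclusion mentioned above):

```xml
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <configuration>
    <!-- extract every boot-feature config except the ACL pids -->
    <pidsToExtract>
      !jmx.acl*,
      !org.apache.karaf.command.acl.*,
      *
    </pidsToExtract>
  </configuration>
</plugin>
```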

Guillaume

2017-03-30 17:17 GMT+02:00 CLEMENT Jean-Philippe <
jean-philippe.clem...@fr.thalesgroup.com>:

> We build our assembly with the plugin you mentioned. But the assembly
> itself, the .tar.gz, does not contain the cfg files in /etc. These
> files are populated in the /etc during the first startup. The problem is
> that the bundles are also started in parallel during the same time. So
> using them may fail.
>
> Is there an option to force the karaf-maven-plugin to put the cfg files in
> the /etc folder inside the .tar.gz archive?
>
> Regards,
> JP
>
> -Message d'origine-
> De : Jean-Baptiste Onofré [mailto:j...@nanthrax.net]
> Envoyé : mercredi 29 mars 2017 19:14
> À : user@karaf.apache.org
> Objet : Re: KARAF-4829 Make sure configFile in features makes config
> available early
>
> Hi JP,
>
> yes, you can create your custom distribution with resource: the
> karaf-maven-plugin will include it in the final archive.
>
> Regards
> JB
>
> On 03/29/2017 07:02 PM, CLEMENT Jean-Philippe wrote:
> > Hi,
> >
> > About the issue KARAF-4829 about config files which may be available
> after bundles start, I'm looking for a workaround.
> >
> > Is there a way to force cfg files to be copied to /etc during assembly?
> >
> > Thanks!
> >
> > Regards,
> > JP
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



-- 

Guillaume Nodet


Re: Karaf & feature.xml files

2017-04-03 Thread Guillaume Nodet
The addUrl command has been renamed to repo-add.
So they are the same command.

To add a file, you may need to use 3 slashes for a file.  The following
command works for me:

feature:repo-add
file:///Users/gnodet/work/git/karaf4x/assemblies/apache-karaf-minimal/target/assembly/system/org/apache/karaf/features/spring/4.1.2-SNAPSHOT/spring-4.1.2-SNAPSHOT-features.xml

2017-04-03 0:42 GMT+02:00 smunro :

> Hello,
>
> I'm trying to put together a guide on installing multiple osgi bundles for
> my team and one solution I would like to explore is using the features.xml
> file.
>
> All the guides on-line have made reference to a features:addUrl command.
> When entering this, I'm informed the features command does not exist, nor
> can I see it referenced in the available commands. There are other ways
> around this to use feature:repo-add mvn:blah, however, I'd like to be able
> to use the file protocol.
>
> So, for example, if I had a features.xml file in
> C:/feature_sandbox/features.xml, I'd like to do
> feature:repo-add file://C:/feature_sandbox/features.xml
>
> The above does not work either. The only other option I have is to use the
> karaf feature plugin, which I'm happy to do, but I was keen to get
> something
> faster in place before I add this plugin to the project. Is the file
> protocol option available?
>
> Secondly, has the features command been removed completely or is it
> something I need to install in addition to my base installation. I haven't
> seen the removal of this mentioned in some of the release notes, so its
> omission is a mystery to me based on the very recently posted articles
> regarding its usage.
>
> Warm Regards,
> Stephen Munro
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.
> com/Karaf-feature-xml-files-tp4049996.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: Blueprint fails instantiating bean with generic constructor

2017-04-03 Thread Guillaume Nodet
 There are 2 different issues.
One is type erasure, i.e. allowing the invocation of a method taking a
List of one generic type with a List of another.  That's ARIES-1607, and I
really think that's a bad idea, unless someone shows me a good example where
it makes sense.  At least by default (well, it's against the blueprint spec
anyway).  A flag to turn on such a behavior (on a bean or globally) could
be an acceptable way, though.

Another issue is ARIES-960 where the same thing written in java would
work.  That's a problem of type assignability verification and I'm willing
to fix those.  I think that's more your use case.  There's a branch I
created a while ago with some changes.  Could you check if your use case
works with the code there ?

In all cases, a workaround is always to provide a custom blueprint
converter, as this would allow converting whatever you want to whatever is
needed.
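The "ClassCastException at a later time" that this distinction guards against is plain Java erasure behaviour; a minimal sketch (class and method names are mine, nothing here is Aries API):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {

    // Erasure removes the type arguments: at runtime both lists
    // share the exact same class object.
    static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        return strings.getClass() == ints.getClass();
    }

    // An unchecked injection "succeeds" at assignment time; the
    // ClassCastException only surfaces later, when the value is read.
    @SuppressWarnings({"unchecked", "rawtypes"})
    static String failLate() {
        List<String> strings = new ArrayList<>();
        ((List) strings).add(Integer.valueOf(42)); // no error here
        try {
            String s = strings.get(0); // implicit checkcast fails here
            return s;
        } catch (ClassCastException e) {
            return "ClassCastException";
        }
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass());   // prints: true
        System.out.println(failLate());           // prints: ClassCastException
    }
}
```

The unchecked add in failLate succeeds silently; the failure only surfaces on the read, which is exactly why rejecting incompatible generic types at wiring time is preferable.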



2017-04-03 11:23 GMT+02:00 CLEMENT Jean-Philippe <
jean-philippe.clem...@fr.thalesgroup.com>:

> Hi Setya,
>
> It might be related to an issue I opened last year:
> https://issues.apache.org/jira/browse/ARIES-1607
>
> At that time I was told to add a custom converter as a workaround. No
> update on the Jira since then; maybe you may vote for it :)
>
> Regards,
> JP
>
> -Message d'origine-
> De : Setya [mailto:jse...@gmail.com]
> Envoyé : vendredi 31 mars 2017 19:09
> À : user@karaf.apache.org
> Objet : Blueprint fails instantiating bean with generic constructor
>
> Hi all,
>
> Aries Blueprint fails to instantiate bean with the following constructors:
>
> public AggregateAnnotationCommandHandler(Class<T> aggregateType,
> Repository<T> repository) { }
>
> It seems to have problems with the second argument since it contains
> generic parameter.
>
> While it successfully instantiates the following bean:
>
> public EventSourcingRepository(AggregateFactory<T> aggregateFactory,
> EventStore eventStore) { }
>
> I'm using Apache Karaf 4.0.8.
>
> Any insight would be greatly appreciated.
>
> Thanks & Regards,
> Setya
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.
> com/Blueprint-fails-instantiating-bean-with-generic-constructor-tp4049986.
> html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: Blueprint fails instantiating bean with generic constructor

2017-04-03 Thread Guillaume Nodet
I still am not comfortable allowing casting a List of one element type to a
List of another, as we perfectly know what will happen.
Blueprint is not a compiler, but if you look at CDI, those kinds of problems
have been handled correctly, and CDI is not a compiler either.  Both
blueprint and CDI are dependency injection frameworks, so there's no
technical reason not to be able to support the use cases correctly,
instead of allowing a ClassCastException at a later time.

2017-04-03 15:46 GMT+02:00 CLEMENT Jean-Philippe <
jean-philippe.clem...@fr.thalesgroup.com>:

> Hi Guillaume,
>
>
>
> As already discussed, Blueprint is not a compiler but a runtime library.
> Once compiled there are no more generics, as Java is a type-erasure language.
> Moreover, I’m not too sure how Blueprint may handle injection with things
> like <S> S getSomething() where in java you can write
> myinstance.<S>getSomething().
>
>
>
> I still do have issues with injection and generics, so a global flag to
> defeat Blueprint checking would be greatly appreciated :)
>
>
>
> JP
>
>
>
> *De :* Guillaume Nodet [mailto:gno...@apache.org]
> *Envoyé :* lundi 3 avril 2017 14:31
> *À :* user
> *Objet :* Re: Blueprint fails instantiating bean with generic constructor
>
>
>
>  There are 2 different issues.
>
> One is type erasure, i.e. allow the invocation of a method taking a
> List with a List for example.  That's ARIES-1607, and I
> really think that's a bad idea, unless someone show me a good example where
> it makes sense.  At least by default (well, it's against the blueprint spec
> anyway).  A flag to turn on such a behavior (on a bean or globally) could
> be an acceptable way, though.
>
>
>
> Another issue is ARIES-960 where the same thing written in java would
> work.  That's a problem of type assignability verification and I'm willing
> to fix those.  I think that's more your use case.  There's a branch I
> created a while ago with some changes.  Could you check if your use case
> works with the code there ?
>
>
>
> In all cases, a workaround is always to provide a custom blueprint
> converter, as this would allow converting whatever you want to whatever is
> needed.
>
>
>
>
>
>
>
> 2017-04-03 11:23 GMT+02:00 CLEMENT Jean-Philippe <
> jean-philippe.clem...@fr.thalesgroup.com>:
>
> Hi Setya,
>
> It might be related to an issue I opened last year:
> https://issues.apache.org/jira/browse/ARIES-1607
>
> At that time I was told to add a custom converter as a workaround. No
> update on the Jira since then; maybe you may vote for it :)
>
> Regards,
> JP
>
> -Message d'origine-
> De : Setya [mailto:jse...@gmail.com]
> Envoyé : vendredi 31 mars 2017 19:09
> À : user@karaf.apache.org
> Objet : Blueprint fails instantiating bean with generic constructor
>
>
> Hi all,
>
> Aries Blueprint fails to instantiate bean with the following constructors:
>
> public AggregateAnnotationCommandHandler(Class aggregateType,
> Repository repository) { }
>
> It seems to have problems with the second argument since it contains
> generic parameter.
>
> While it successfully instantiates the following bean:
>
> public EventSourcingRepository(AggregateFactory aggregateFactory,
> EventStore eventStore) { }
>
> I'm using Apache Karaf 4.0.8.
>
> Any insight would be greatly appreciated.
>
> Thanks & Regards,
> Setya
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.
> com/Blueprint-fails-instantiating-bean-with-generic-constructor-tp4049986.
> html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>
>
>
>
>
> --
>
> 
> Guillaume Nodet
>
>
>



-- 

Guillaume Nodet


Re: Blueprint fails instantiating bean with generic constructor

2017-04-03 Thread Guillaume Nodet
2017-04-03 17:03 GMT+02:00 CLEMENT Jean-Philippe <
jean-philippe.clem...@fr.thalesgroup.com>:

> I understand your point and you’re right, it’s better to test things early
> than let the system crash later on. To go further I guess the Blueprint
> spec would have to be enhanced to fully support generics, for instance to
> be able to instantiate an ArrayList bean, to expose and retrieve
> services with generics signature etc.
>

That's what I did to some degree with my ARIES-960 branch:
  https://github.com/gnodet/aries/tree/ARIES-960/blueprint/blueprint-core

That's why I'd like some people to look at it and see if their actual use
case works.
From that point:
  * they work, and that's great
  * they don't work and they should, and I'll work on fixing those
  * they don't work because they require ARIES-1633 which I don't have the
time to implement in the short term
In the third case, I could look at adding a flag...


>
>
> That said, a fallback strategy could also be welcome in the meantime. And
> it would be more efficient than letting the system try to convert, fail,
> then finally use the custom “erasure converter” defined in the Blueprint.
>
>
>
> Would it be possible to get a nice generics defeat flag?
>

Please provide a test case for my ARIES-960 branch which falls into the
third category above.


>
>
> Regards,
>
> JP
>
>
>
> *De :* Guillaume Nodet [mailto:gno...@apache.org]
> *Envoyé :* lundi 3 avril 2017 16:30
>
> *À :* user
> *Objet :* Re: Blueprint fails instantiating bean with generic constructor
>
>
>
> I still am not comfortable allowing casting List to List
> as we perfectly know what will happen.
>
> Blueprint is not a compiler, but if you look at CDI, those kind of
> problems have been handled correctly for example, and CDI is not a compiler
> either, but both blueprint and CDI are dependency  injection framework, so
> there's no technical reason to not be able to support the use cases
> correctly, instead of allowing ClassCastException at a later time.
>
>
>
> 2017-04-03 15:46 GMT+02:00 CLEMENT Jean-Philippe <
> jean-philippe.clem...@fr.thalesgroup.com>:
>
> Hi Guillaume,
>
>
>
> As already discussed, Blueprint is not a compiler but a runtime library.
> Once compiled there is no more generics as Java is a type erasure language.
> Moreover, I’m not too sure how Blueprint may handle injection with things
> like  S getSomething() where in java you can write myinstance.
> getSomething().
>
>
>
> I still do have issues with injection and generics, so a global flag to
> defeat Blueprint checking would be greatly appreciated :)
>
>
>
> JP
>
>
>
> *De :* Guillaume Nodet [mailto:gno...@apache.org]
> *Envoyé :* lundi 3 avril 2017 14:31
> *À :* user
> *Objet :* Re: Blueprint fails instantiating bean with generic constructor
>
>
>
>  There are 2 different issues.
>
> One is type erasure, i.e. allow the invocation of a method taking a
> List with a List for example.  That's ARIES-1607, and I
> really think that's a bad idea, unless someone show me a good example where
> it makes sense.  At least by default (well, it's against the blueprint spec
> anyway).  A flag to turn on such a behavior (on a bean or globally) could
> be an acceptable way, though.
>
>
>
> Another issue is ARIES-960 where the same thing written in java would
> work.  That's a problem of type assignability verification and I'm willing
> to fix those.  I think that's more your use case.  There's a branch I
> created a while ago with some changes.  Could you check if your use case
> works with the code there ?
>
>
>
> In all cases, a workaround is always to provide a custom blueprint
> converter, as this would allow converting whatever you want to whatever is
> needed.
>
>
>
>
>
>
>
> 2017-04-03 11:23 GMT+02:00 CLEMENT Jean-Philippe <
> jean-philippe.clem...@fr.thalesgroup.com>:
>
> Hi Setya,
>
> It might be related to an issue I opened last year:
> https://issues.apache.org/jira/browse/ARIES-1607
>
> At that time I was told to add a custom converter as a workaround. No
> update on the Jira since then; maybe you may vote for it :)
>
> Regards,
> JP
>
> -Message d'origine-
> De : Setya [mailto:jse...@gmail.com]
> Envoyé : vendredi 31 mars 2017 19:09
> À : user@karaf.apache.org
> Objet : Blueprint fails instantiating bean with generic constructor
>
>
> Hi all,
>
> Aries Blueprint fails to instantiate bean with the following constructors:
>
> public AggregateAnnotationCommandHandler(Class aggregateType,
> Repository repository) { }
>

Re: https://issues.apache.org/jira/browse/KARAF-4829

2017-04-03 Thread Guillaume Nodet
You can use a properties file.  Afaik, there's no problem with them and
they are fully supported.
Config files with typed properties will be supported through KARAF-5074
<https://issues.apache.org/jira/browse/KARAF-5074> which is a new feature.

For the location attribute, a new JIRA would be required.  But if you're
planning to use typed config files and use that attribute as a workaround,
that's a bad idea, as other parts of Karaf will not support it.

Btw, if you want to speed up things a bit, as you seem quite vocal, one way
would be to provide some patches.

2017-04-03 18:41 GMT+02:00 CLEMENT Jean-Philippe <
jean-philippe.clem...@fr.thalesgroup.com>:

> Hi,
>
>
>
> The issue KARAF-4829 was closed. I thought I could now use the config tag
> along with a location attribute, but I’m wrong!
>
>
>
> How to configure a feature in order configuration files to be installed
> prior to bundle startup?
>
>
>
> Thanks!
>
>
>
> Regards,
>
> JP
>
>
>
> *De :* Guillaume Nodet [mailto:gno...@apache.org]
> *Envoyé :* lundi 3 avril 2017 18:18
> *À :* user
> *Objet :* Re: Blueprint fails instantiating bean with generic constructor
>
>
>
>
>
>
>
> 2017-04-03 17:03 GMT+02:00 CLEMENT Jean-Philippe <
> jean-philippe.clem...@fr.thalesgroup.com>:
>
> I understand your point and you’re right, it’s better to test things early
> than let the system crash later on. To go further I guess the Blueprint
> spec would have to be enhanced to fully support generics, for instance to
> be able to instantiate an ArrayList bean, to expose and retrieve
> services with generics signature etc.
>
>
>
> That's what I did to some degree with my ARIES-960 branch:
>
>   https://github.com/gnodet/aries/tree/ARIES-960/blueprint/blueprint-core
>
>
>
> That's why I'd like some people to look at it and see if their actual use
> case works.
>
> From that point:
>
>   * they work, and that's great
>
>   * they don't work and they should, and I'll work on fixing those
>
>   * they don't work because they require ARIES-1633 which I don't have the
> time to implement in the short term
>
> In the third case, I could look at adding a flag...
>
>
>
>
>
> That said, a fallback strategy could also be welcome in the meantime. And
> it would be more efficient than letting the system try to convert, fail,
> then finally use the custom “erasure converter” defined in the Blueprint.
>
>
>
> Would it be possible to get a nice generics defeat flag?
>
>
>
> Please provide a test case for my ARIES-960 branch which falls into the
> third category above.
>
>
>
>
>
> Regards,
>
> JP
>
>
>
> *De :* Guillaume Nodet [mailto:gno...@apache.org]
> *Envoyé :* lundi 3 avril 2017 16:30
>
>
> *À :* user
> *Objet :* Re: Blueprint fails instantiating bean with generic constructor
>
>
>
> I still am not comfortable allowing casting List to List
> as we perfectly know what will happen.
>
> Blueprint is not a compiler, but if you look at CDI, those kind of
> problems have been handled correctly for example, and CDI is not a compiler
> either, but both blueprint and CDI are dependency  injection framework, so
> there's no technical reason to not be able to support the use cases
> correctly, instead of allowing ClassCastException at a later time.
>
>
>
> 2017-04-03 15:46 GMT+02:00 CLEMENT Jean-Philippe <
> jean-philippe.clem...@fr.thalesgroup.com>:
>
> Hi Guillaume,
>
>
>
> As already discussed, Blueprint is not a compiler but a runtime library.
> Once compiled there is no more generics as Java is a type erasure language.
> Moreover, I’m not too sure how Blueprint may handle injection with things
> like  S getSomething() where in java you can write myinstance.
> getSomething().
>
>
>
> I still do have issues with injection and generics, so a global flag to
> defeat Blueprint checking would be greatly appreciated :)
>
>
>
> JP
>
>
>
> *De :* Guillaume Nodet [mailto:gno...@apache.org]
> *Envoyé :* lundi 3 avril 2017 14:31
> *À :* user
> *Objet :* Re: Blueprint fails instantiating bean with generic constructor
>
>
>
>  There are 2 different issues.
>
> One is type erasure, i.e. allow the invocation of a method taking a
> List with a List for example.  That's ARIES-1607, and I
> really think that's a bad idea, unless someone show me a good example where
> it makes sense.  At least by default (well, it's against the blueprint spec
> anyway).  A flag to turn on such a behavior (on a bean or globally) could
> be an acceptable way, though.
>
>

Re: https://issues.apache.org/jira/browse/KARAF-4829

2017-04-03 Thread Guillaume Nodet
Btw, I closed the jira because the initial problem is not a problem per
se.  When used properly, config files are pushed to ConfigAdmin before
bundles are started.  This only happens because of the use of those typed
properties files, which aren't really supported yet.

2017-04-03 19:22 GMT+02:00 Guillaume Nodet :

> You can use a properties file.  Afaik, there's no problem with them and
> they are fully supported.
> Config files with typed properties will be supported through KARAF-5074
> <https://issues.apache.org/jira/browse/KARAF-5074> which is a new feature.
>
> For the location attribute, a new JIRA would be required.  But if you're
> planning to use typed config files and use that attribute as a work around,
> that's a bad idea, as other parts of Karaf will not support it.
>
> Btw, if you want to speed up things a bit, as you seem quite vocal, one
> way would be to provide some patches.
>
> 2017-04-03 18:41 GMT+02:00 CLEMENT Jean-Philippe <
> jean-philippe.clem...@fr.thalesgroup.com>:
>
>> Hi,
>>
>>
>>
>> The issue KARAF-4829 was closed. I thought I could now use the config tag
>> as long with a location attribute, but I’m wrong!
>>
>>
>>
>> How to configure a feature in order configuration files to be installed
>> prior to bundle startup?
>>
>>
>>
>> Thanks!
>>
>>
>>
>> Regards,
>>
>> JP
>>
>>
>>
>> *De :* Guillaume Nodet [mailto:gno...@apache.org]
>> *Envoyé :* lundi 3 avril 2017 18:18
>> *À :* user
>> *Objet :* Re: Blueprint fails instantiating bean with generic constructor
>>
>>
>>
>>
>>
>>
>>
>> 2017-04-03 17:03 GMT+02:00 CLEMENT Jean-Philippe <
>> jean-philippe.clem...@fr.thalesgroup.com>:
>>
>> I understand your point and you’re right, it’s better to test things
>> early than let the system crash later on. To go further I guess the
>> Blueprint spec would have to be enhanced to fully support generics, for
>> instance to be able to instantiate an ArrayList bean, to expose and
>> retrieve services with generics signature etc.
>>
>>
>>
>> That's what I did to some degree with my ARIES-960 branch:
>>
>>   https://github.com/gnodet/aries/tree/ARIES-960/blueprint/blueprint-core
>>
>>
>>
>> That's why I'd like some people to look at it and see if their actual use
>> case works.
>>
>> From that point:
>>
>>   * they work, and that's great
>>
>>   * they don't work and they should, and I'll work on fixing those
>>
>>   * they don't work because they require ARIES-1633 which I don't have
>> the time to implement in the short term
>>
>> In the third case, I could look at adding a flag...
>>
>>
>>
>>
>>
>> That said, a fallback strategy could also be welcome in the meantime. And
>> it would be more efficient than letting the system try to convert, fail,
>> then finally use the custom “erasure converter” defined in the Blueprint.
>>
>>
>>
>> Would it be possible to get a nice generics defeat flag?
>>
>>
>>
>> Please provide a test case for my ARIES-960 branch which falls into the
>> third category above.
>>
>>
>>
>>
>>
>> Regards,
>>
>> JP
>>
>>
>>
>> *De :* Guillaume Nodet [mailto:gno...@apache.org]
>> *Envoyé :* lundi 3 avril 2017 16:30
>>
>>
>> *À :* user
>> *Objet :* Re: Blueprint fails instantiating bean with generic constructor
>>
>>
>>
>> I still am not comfortable allowing casting List to List
>> as we perfectly know what will happen.
>>
>> Blueprint is not a compiler, but if you look at CDI, those kind of
>> problems have been handled correctly for example, and CDI is not a compiler
>> either, but both blueprint and CDI are dependency  injection framework, so
>> there's no technical reason to not be able to support the use cases
>> correctly, instead of allowing ClassCastException at a later time.
>>
>>
>>
>> 2017-04-03 15:46 GMT+02:00 CLEMENT Jean-Philippe <
>> jean-philippe.clem...@fr.thalesgroup.com>:
>>
>> Hi Guillaume,
>>
>>
>>
>> As already discussed, Blueprint is not a compiler but a runtime library.
>> Once compiled there is no more generics as Java is a type erasure language.
>> Moreover, I’m not too sure how Blueprint may handle injection with things
>> like  S getSomething() where in java you can write myinstance.
>>

Re: https://issues.apache.org/jira/browse/KARAF-4829

2017-04-04 Thread Guillaume Nodet
Configuration files for ConfigAdmin are supposed to be used with the
<config> element.
The <configfile> element has been designed to support non-configadmin
files, such as xml, binaries, or whatever.

In both cases, when they come from a feature, they will be installed to the
file system before the bundles listed in the features are started.
The problem is that if you're using <configfile> instead of <config>, the
configuration won't be pushed to ConfigAdmin by the features service.  They
will only end up in ConfigAdmin through fileinstall, and that's an
asynchronous process.

In short, the current design is the following:
 * when using <config>, the configuration is written synchronously to the
disk and config admin
 * when using <configfile>, the file is written synchronously to the disk

Your use case of expecting a synchronous write to config admin when
using <configfile> is not supported currently.

The standard way to use <config> is to inline the configuration; see a very
simple example here:

https://github.com/apache/karaf/blob/master/assemblies/features/standard/src/main/feature/feature.xml#L876-L879
If you really need the location attribute on the <config> element, please
raise a JIRA.
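Put together, the two elements look like this inside a feature (all names, pids, and coordinates here are illustrative, not from the thread):

```xml
<feature name="my-app" version="1.0.0">
  <!-- <config>: written synchronously to etc/ AND pushed to ConfigAdmin
       before the bundles below are started -->
  <config name="org.example.myapp">
    poolSize = 8
  </config>
  <!-- <configfile>: only copied to disk; ConfigAdmin pickup happens
       asynchronously via fileinstall -->
  <configfile finalname="/etc/org.example.other.cfg">mvn:org.example/other/1.0.0/cfg</configfile>
  <bundle>mvn:org.example/myapp/1.0.0</bundle>
</feature>
```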


2017-04-04 14:39 GMT+02:00 CLEMENT Jean-Philippe <
jean-philippe.clem...@fr.thalesgroup.com>:

> Hi Guillaume,
>
>
>
> Sorry, I’m lost :)
>
>
>
> At present time I guess we use “regular” .cfg files (key/value pairs in
> basic string format) via configfile tags. The issue is that they are
> populated in the /etc directory at the same time bundles start (during the
> very first start), so bundles may - or may not - find configuration
> properties. Here is an example of how we use configfile tags:
>
>
>
> <configfile finalname="...">mvn:some.example/whatever/1.0.0/cfg</configfile>
>
>
>
> As far as I understand, bundles starting in parallel with the copy of .cfg
> files to etc is the nominal case, and we should use the config tag
> instead of configfile to get .cfg files into /etc before bundles start.
> Ok, so now I wonder how to use .cfg files with the config tag. I’m not
> complaining about anything… just trying to find out how it works.
>
>
>
> > When used properly, config files are pushed to ConfigAdmin before
> > bundles are started
>
>
>
> Something like <config>mvn:some.example/whatever/1.0.0/cfg</config>?
>
>
>
> Regards,
>
> JP
>
>
>
> *De :* Guillaume Nodet [mailto:gno...@apache.org]
> *Envoyé :* lundi 3 avril 2017 19:26
> *À :* user
> *Objet :* Re: https://issues.apache.org/jira/browse/KARAF-4829
>
>
>
> Btw, I closed the jira because the initial problem is not a problem per
> se.  When used properly, config files are pushed to ConfigAdmin before
> bundles are started.  This only happen because of the use of  those typed
>  properties files which aren't really supported yet.
>
>
>
> 2017-04-03 19:22 GMT+02:00 Guillaume Nodet :
>
> You can use a properties file.  Afaik, there's no problem with them and
> they are fully supported.
>
> Config files with typed properties will be supported through KARAF-5074
> <https://issues.apache.org/jira/browse/KARAF-5074> which is a new feature.
>
>
>
> For the location attribute, a new JIRA would be required.  But if you're
> planning to use typed config files and use that attribute as a work around,
> that's a bad idea, as other parts of Karaf will not support it.
>
>
>
> Btw, if you want to speed up things a bit, as you seem quite vocal, one
> way would be to provide some patches.
>
>
>
> 2017-04-03 18:41 GMT+02:00 CLEMENT Jean-Philippe <
> jean-philippe.clem...@fr.thalesgroup.com>:
>
> Hi,
>
>
>
> The issue KARAF-4829 was closed. I thought I could now use the config tag
> as long with a location attribute, but I’m wrong!
>
>
>
> How to configure a feature in order configuration files to be installed
> prior to bundle startup?
>
>
>
> Thanks!
>
>
>
> Regards,
>
> JP
>
>
>
> *From:* Guillaume Nodet [mailto:gno...@apache.org]
> *Sent:* Monday, April 3, 2017 18:18
> *To:* user
> *Subject:* Re: Blueprint fails instantiating bean with generic constructor
>
>
>
>
>
>
>
> 2017-04-03 17:03 GMT+02:00 CLEMENT Jean-Philippe <
> jean-philippe.clem...@fr.thalesgroup.com>:
>
> I understand your point and you’re right, it’s better to test things early
> than let the system crash later on. To go further I guess the Blueprint
> spec would have to be enhanced to fully support generics, for instance to
> be able to instantiate an ArrayList bean, to expose and retrieve
> services with generics signature etc.
>
>
>
> That's what I did to some degree with my ARIES-960 branch:
>
>   https://github.com/gnodet/aries/tree/ARIES-960/blueprint/blueprint-core
>

Re: bundle:watch

2017-04-04 Thread Guillaume Nodet
The requirement is that the bundle uses a maven snapshot url.
If that's the case, the bundle:watch command will look for updates in the
local repository.
The only command I'm using for the bundle:watch is "bundle:watch *" which
updates all maven snapshots whenever I rebuild something.

2017-04-04 10:41 GMT+02:00 Cristiano Costantini <
cristiano.costant...@gmail.com>:

> Hello all,
>
> I am trying to use for the first time the bundle:watch command,
> however I cannot get it to work correctly...
>
> I watch a bundle with bundle:watch ID, I also start explicitly watching
> with bundle:watch --start, the bundle is correctly listed when I execute
> bundle:watch --list , but when I recompile my bundle using mvn clean
> install, it is not updated automatically.
>
> I'm using Karaf 4.0.5
>
> Am I doing something wrong?
> Is anyone else experiencing the same issue?
>
> Thanks
> Cristiano
>
>


-- 

Guillaume Nodet


Re: FW: Change in behaviour of karaf-maven-plugin w.r.t. dependencies for feature packaging between 4.0.x and 4.1.x

2017-04-12 Thread Guillaume Nodet
Didn't we discuss that on dev@ a while ago?
The thread is named "[DISCUSS] Feature package, feature generation and
validation", started on 13/10/2016.
IIRC, the outcome was to remove the  ‘feature-generate-descriptor’ goal
from the ‘feature’ packaging by default.

2017-04-12 16:36 GMT+02:00 Jean-Baptiste Onofré :

> Hi Arunan,
>
> let me take a look but it sounds like a bug to me.
>
> I keep you posted.
>
> Regards
> JB
>
>
> On 04/12/2017 04:25 PM, Arunan Balasubramaniam wrote:
>
>>   Hello,
>>
>>   With updating to Karaf 4.1.1 I have found a change in behaviour in the
>> karaf-maven-plugin seemingly no longer adding bundles from POM
>> dependencies. I have put a testcase up at https://github.com/arunan-interlink/karaf-maven-plugin-features-change that demonstrates the issue.
>> Changing the version of karaf-maven-plugin used in the top level POM shows
>> the different behaviours in the output feature file.
>>
>>   Has anybody else had similar issues? Has anybody else started using the
>> 4.1.x Maven plugins and had this work?
>>
>>   My apologies if I have missed a release note, existing issue or if this
>> is an incorrect configuration.
>>
>>   Regards,
>>   Arunan
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



-- 

Guillaume Nodet


Re: Karaf 4.1.1 Console Issues Over SSH (PuTTY)

2017-04-14 Thread Guillaume Nodet
hd.core:1.4.0]
> at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.8.0_92]
> at sun.nio.ch.Invoker$2.run(Invoker.java:218)[:1.8.0_92]
> at
> sun.nio.ch.AsynchronousChannelGroupImpl$1.run(
> AsynchronousChannelGroupImpl.java:112)[:1.8.0_92]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)[:1.8.0_92]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)[:1.8.0_92]
> at java.lang.Thread.run(Thread.java:745)[:1.8.0_92]
> Caused by: java.io.IOException: CreateProcess error=2, The system cannot
> find the file specified
> at java.lang.ProcessImpl.create(Native Method)[:1.8.0_92]
> at java.lang.ProcessImpl.(ProcessImpl.java:386)[:1.8.0_92]
> at java.lang.ProcessImpl.start(ProcessImpl.java:137)[:1.8.0_92]
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:
> 1029)[:1.8.0_92]
> ... 30 more
>
>
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.com/Karaf-4-1-1-Console-Issues-Over-SSH-PuTTY-tp4050131.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: Issue with bin/shell command 4.1.1

2017-04-24 Thread Guillaume Nodet
Could you please raise a JIRA ? I'll have a look at it asap.

2017-04-24 17:50 GMT+02:00 Frédéric Curvat :

> Hello,
>
> When i try to run this simple bin/shell command i get the exception :
>
> $ bin/shell 'wrapper:install --help'
> [org.apache.karaf.shell.impl.console.ConsoleSessionImpl] : completionMode
> property is not defined in etc/org.apache.karaf.shell.cfg file. Using
> default completion mode.
> [org.apache.karaf.shell.support.ShellUtil] : Command exception (Undefined
> option, ...)
> org.apache.karaf.shell.support.CommandException: Too many arguments
> specified
> at org.apache.karaf.shell.impl.action.command.DefaultActionPreparator.
> prepare(DefaultActionPreparator.java:160)
> at org.apache.karaf.shell.impl.action.command.ActionCommand.
> execute(ActionCommand.java:83)
> at org.apache.karaf.shell.impl.console.CommandWrapper.
> execute(CommandWrapper.java:53)
> at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:560)
> at org.apache.felix.gogo.runtime.Closure.executeStatement(
> Closure.java:486)
> at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:375)
> at org.apache.felix.gogo.runtime.Pipe.doCall(Pipe.java:417)
> at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:229)
> at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:59)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Error executing command wrapper:install: too many arguments specified
>
> This is happening on 4.1.1, it did not happen on 4.0.7.
>
> Regards,
>



-- 

Guillaume Nodet


Re: Modularizing our code => How to debug cyclic dependencies on the features

2017-04-25 Thread Guillaume Nodet
download(MavenDownloadManager.java:127)
> at org.apache.karaf.profile.assembly.Builder$4.downloaded(Build
> er.java:1154)
> - locked <0x0005f5a017e8> (a java.util.HashMap)
> at org.apache.karaf.features.internal.download.impl.MavenDownlo
> adManager$MavenDownloader$1.operationComplete(MavenDownloadM
> anager.java:133)
> at org.apache.karaf.features.internal.download.impl.MavenDownlo
> adManager$MavenDownloader$1.operationComplete(MavenDownloadM
> anager.java:127)
> at org.apache.karaf.features.internal.download.impl.DefaultFutu
> re.notifyListener(DefaultFuture.java:344)
> at org.apache.karaf.features.internal.download.impl.DefaultFutu
> re.addListener(DefaultFuture.java:293)
> at org.apache.karaf.features.internal.download.impl.MavenDownlo
> adManager$MavenDownloader.download(MavenDownloadManager.java:127)
> at org.apache.karaf.profile.assembly.Builder$4.downloaded(Build
> er.java:1154)
> - locked <0x0005f5a017e8> (a java.util.HashMap)
> ...
> ...
> many more repetitions of the same stack...
> ...
> ...
>
>
> Thanks again for your help!
>



-- 

Guillaume Nodet


Re: Karaf 4.1.x / httplite incompatibility

2017-04-26 Thread Guillaume Nodet
I suppose it's mostly a matter of having a new release of httplite which
could support the servlet api 3.1 in the range.

2017-04-27 6:25 GMT+02:00 Jean-Baptiste Onofré :

> Hi,
>
> do you have a test case to reproduce it ? I will take a look.
>
> Regards
> JB
>
>
> On 04/26/2017 01:24 PM, Stephen Winnall wrote:
>
>> I haven’t resolved this, but it isn’t actually causing me any problems at
>> the moment (i.e. I’m ignoring it). It just makes the log files look untidy.
>>
>> I’m a bit surprised no-one else commented on it, though.
>>
>> Steve
>>
>> On 26 Apr 2017, at 05:36, Mark Derricutt  wrote:
>>>
>>> On 10 Apr 2017, at 21:40, Stephen Winnall wrote:
>>>
>>> I am trying to build a Karaf assembly using Karaf 4.1.1, Java
>>>> 1.8.0_76-ea-b04, Maven 3.3.9, Netbeans 8.2 and macOS 10.12.4. I am getting
>>>> an error message, even if I omit all my own features from the build (i.e. I
>>>> build an empty Karaf):
>>>>
>>>
>>> Stephen - Did you ever resolve this? I was hit with the same thing
>>> updating from 4.0.8 straight to 4.1.1, opted to just migrate to 4.0.9
>>> instead for now.
>>>
>>> --
>>> Mark Derricutt
>>> http://www.theoryinpractice.net
>>> http://www.chaliceofblood.net
>>> http://plus.google.com/+MarkDerricutt
>>> http://twitter.com/talios
>>> http://facebook.com/mderricutt
>>>
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



-- 

Guillaume Nodet


Re: Karaf 4.1.x / httplite incompatibility

2017-04-26 Thread Guillaume Nodet
I've fixed the httplite headers.

We should still investigate why httplite is involved in the problem at all,
as it should not be installed unless explicitly required.

2017-04-27 8:52 GMT+02:00 Guillaume Nodet :

> I suppose it's mostly a matter of having a new release of httplite which
> could support the servlet api 3.1 in the range.
>
> 2017-04-27 6:25 GMT+02:00 Jean-Baptiste Onofré :
>
>> Hi,
>>
>> do you have a test case to reproduce it ? I will take a look.
>>
>> Regards
>> JB
>>
>>
>> On 04/26/2017 01:24 PM, Stephen Winnall wrote:
>>
>>> I haven’t resolved this, but it isn’t actually causing me any problems
>>> at the moment (i.e. I’m ignoring it). It just makes the log files look
>>> untidy.
>>>
>>> I’m a bit surprised no-one else commented on it, though.
>>>
>>> Steve
>>>
>>> On 26 Apr 2017, at 05:36, Mark Derricutt  wrote:
>>>>
>>>> On 10 Apr 2017, at 21:40, Stephen Winnall wrote:
>>>>
>>>> I am trying to build a Karaf assembly using Karaf 4.1.1, Java
>>>>> 1.8.0_76-ea-b04, Maven 3.3.9, Netbeans 8.2 and macOS 10.12.4. I am getting
>>>>> an error message, even if I omit all my own features from the build (i.e. 
>>>>> I
>>>>> build an empty Karaf):
>>>>>
>>>>
>>>> Stephen - Did you ever resolve this? I was hit with the same thing
>>>> updating from 4.0.8 straight to 4.1.1, opted to just migrate to 4.0.9
>>>> instead for now.
>>>>
>>>> --
>>>> Mark Derricutt
>>>> http://www.theoryinpractice.net
>>>> http://www.chaliceofblood.net
>>>> http://plus.google.com/+MarkDerricutt
>>>> http://twitter.com/talios
>>>> http://facebook.com/mderricutt
>>>>
>>>
>>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>
>
>
> --
> 
> Guillaume Nodet
>
>


-- 

Guillaume Nodet


Re: Karaf 4.1.x / httplite incompatibility

2017-04-27 Thread Guillaume Nodet
Could you please raise a JIRA issue and attach your project ?
I'll try to have a look.

2017-04-27 13:42 GMT+02:00 Stephen Winnall :

> I attach the source tree of a test case. Just unpack, build and run, and
> log:display.
>
> Steve
>
>
> > On 27 Apr 2017, at 06:25, Jean-Baptiste Onofré  wrote:
> >
> > Hi,
> >
> > do you have a test case to reproduce it ? I will take a look.
> >
> > Regards
> > JB
> >
> > On 04/26/2017 01:24 PM, Stephen Winnall wrote:
> >> I haven’t resolved this, but it isn’t actually causing me any problems
> at the moment (i.e. I’m ignoring it). It just makes the log files look
> untidy.
> >>
> >> I’m a bit surprised no-one else commented on it, though.
> >>
> >> Steve
> >>
> >>> On 26 Apr 2017, at 05:36, Mark Derricutt  wrote:
> >>>
> >>> On 10 Apr 2017, at 21:40, Stephen Winnall wrote:
> >>>
> >>>> I am trying to build a Karaf assembly using Karaf 4.1.1, Java
> 1.8.0_76-ea-b04, Maven 3.3.9, Netbeans 8.2 and macOS 10.12.4. I am getting
> an error message, even if I omit all my own features from the build (i.e. I
> build an empty Karaf):
> >>>
> >>> Stephen - Did you ever resolve this? I was hit with the same thing
> updating from 4.0.8 straight to 4.1.1, opted to just migrate to 4.0.9
> instead for now.
> >>>
> >>> --
> >>> Mark Derricutt
> >>> http://www.theoryinpractice.net
> >>> http://www.chaliceofblood.net
> >>> http://plus.google.com/+MarkDerricutt
> >>> http://twitter.com/talios
> >>> http://facebook.com/mderricutt
> >>
> >
> > --
> > Jean-Baptiste Onofré
> > jbono...@apache.org
> > http://blog.nanthrax.net
> > Talend - http://www.talend.com
>
>
>


-- 

Guillaume Nodet


Re: Karaf 4.1.1 console issue with list and grep

2017-05-03 Thread Guillaume Nodet
It's not really auto-completion: if you hit enter, the final space +
quote won't appear in the command.  They appear in grey as a hint
that your command is incomplete.  Just enter "foo bar" with the
quotes and it should be fine.

2017-05-03 20:34 GMT+02:00 asookazian2 :

> bundle:list | grep -i "foo "
>
> On Mac the space and end quote are auto-completed when I type the first
> quote.  I can't delete the space.  So I can't search on "foo bar".  Known
> issue/bug?
>
> Does not reproduce on Ubuntu.
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.com/Karaf-4-1-1-console-issue-with-list-and-grep-tp4050295.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: Custom distribution - different feature types in karaf-maven-plugin

2017-05-04 Thread Guillaume Nodet
2017-05-04 13:30 GMT+02:00 Siano, Stephan :

> Hi,
>
> There is some documentation on how to create a custom distribution of
> karaf. In general it seems to be recommended to use the karaf-maven-plugin
> for that.
>
> The features preinstalled in the custom distribution can be defined by
> different configurations in the karaf-maven-plugin, but I am not sure
> whether I really understood the functionality correctly:
>
> The  configuration can contain a single feature from the
> framework kar. This is usually "framework", but "framework-logback" is also
> possible.
>
> The , , and  tags can
> all contain a list of features that are available in feature dependencies
> (with runtime scope).
>
> If I got that right, the installedFeatures are available on the running
> node and can be installed without network access but are not installed and
> started by default. Is this correct?
>

Yes


>
> What is the difference between the bootFeatures and the startupFeatures?
> The documentation states that startupFeatures are written to the
> startup.properties with the appropriate start level whereas bootFeatures
> are added to boot-features in the feature service, but what is the
> difference between these two approaches?
>
> The framework feature also goes into the startup.properties, so why is the
> framework handled separately?
>

Because the framework has to be a kar file and provides the startup scripts
and basic configuration files.
The framework property itself is not required, the plugin can infer it from
the dependencies list if you have a dependency on
mvn:org.apache.karaf.features/framework/ or
mvn:org.apache.karaf.features/static/ kar.

Also, fwiw, there are 4 frameworks supported atm: framework,
framework-logback, static-framework, static-framework-logback.
The static ones are used to build "static" distributions, i.e. the ways to
deploy bundles at runtime or even change configuration are removed.  Those
are useful when building micro-services applications for example.
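As an illustrative sketch of the three feature lists discussed above (the artifact names and feature names here are assumptions, not taken from an actual build), a karaf-maven-plugin configuration could look like this:

```xml
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <configuration>
    <!-- written to startup.properties with the appropriate start levels -->
    <startupFeatures>
      <feature>framework</feature>
    </startupFeatures>
    <!-- installed and started by the features service on first boot -->
    <bootFeatures>
      <feature>standard</feature>
      <feature>my-app</feature>
    </bootFeatures>
    <!-- staged in the system repository, installable offline, not started -->
    <installedFeatures>
      <feature>my-optional-tooling</feature>
    </installedFeatures>
  </configuration>
</plugin>
```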



> Best regards
> Stephan
>



-- 

Guillaume Nodet


Re: How does blacklist works?

2017-05-08 Thread Guillaume Nodet
You can blacklist features using the following syntax
  *featureName*[;type=feature][;range=*versionRange*]
Where
  * *featureName* is the name of the feature to blacklist
  * *versionRange* is an optional version range to restrict the blacklisted
features

For a bundle, this need to be
  *bundle*;[type=bundle][;url=*bundleUrl*]
or
  *bundleUrl*[;type=bundle]
Where
  * *bundle* can be anything
  * *bundleUrl* is the full url of the bundle
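As a sketch of those rules, a blacklist.properties could contain entries such as the following (the feature name, the bundle coordinates and the exact quoting of the version range are illustrative assumptions):

```properties
# blacklist the "transaction" feature, whatever its version
transaction;type=feature
# blacklist it only within a version range
transaction;type=feature;range="[1,2)"
# blacklist one specific bundle by its full url
mvn:org.foo/bar/1.0.0;type=bundle
```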

Guillaume

2017-05-08 23:16 GMT+02:00 Castor :

> I'm using Karaf 4.0.8
>
> Investigating the karaf-maven-plugin i found the following tag:
>
> 
>   
> 
>
> For testing purposes, i created a bundle foo.bar and blacklisted it
>
> 
>   org.foo.bar/*
> 
>
> it generated a blacklist.properties with:
>
> #
> # Generated by the karaf assembly builder
> #
>
> # Bundles
> org.foo.bar/*
>
> But still, when i install a feature containing the blacklisted bundle, the
> bundle gets installed, i'm not sure if i got the idea of blacklist wrong,
> but i think the bundle shouldn't get installed.
>
> Am i using the blacklist in a wrong way?
>
>
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.com/How-does-blacklist-works-tp4050317.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: Karaf 4.x & Shiro Support

2017-05-09 Thread Guillaume Nodet
I've provided a better PR at https://github.com/apache/shiro/pull/63

2017-05-08 13:01 GMT+02:00 Castor :

> I'm also using Apache Shiro in production, and yes, there is a problem with
> the shiro-feature; there is an open pull request to fix that.
>
> https://github.com/apache/shiro/pull/43
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.com/Karaf-4-x-Shiro-Support-tp4050311p4050314.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: materialize bundles?

2017-05-17 Thread Guillaume Nodet
Not really.
What would be the use of such a directory ?
Using that as the deploy folder for a newly created instance ? You'd loose
some information such as configurations, start levels, etc...

2017-05-18 0:55 GMT+02:00 Scott Lewis :

> Is there any easy way (e.g. console or mvn command) to materialize all the
> bundles/versions in a given running instance of Karaf and put them as jars
> into a target directory?
>
>
>
>


-- 
----
Guillaume Nodet


Re: materialize bundles?

2017-05-18 Thread Guillaume Nodet
The karaf maven plugin is perfectly suited to create custom distributions.
We do use it to create the karaf official distributions, so unless
something is missing, I'd suggest having a look at it.
See for example:

https://github.com/apache/karaf/blob/master/assemblies/apache-karaf-minimal/pom.xml#L102-L148

2017-05-18 22:51 GMT+02:00 Scott Lewis :

> On 5/17/2017 11:24 PM, Guillaume Nodet wrote:
>
>> Not really.
>> What would be the use of such a directory ?
>>
>
> It could be used to develop to and package custom distributions that are
> based upon Karaf and other projects...for example OpenHab.
>
>
> Using that as the deploy folder for a newly created instance? You'd lose
>> some information such as configurations, start levels, etc...
>>
>
> Yes that's true/understood.
>
> I suppose what I'm getting at is:  how to most easily develop to and
> create custom distributions...not of Karaf directly, but perhaps Karaf +
> OpenHAB + others.
>
> Scott
>
>


-- 

Guillaume Nodet


Re: strange issue with karaf-maven-plugin

2017-05-30 Thread Guillaume Nodet
he
> installed versions are not only 6.0.4 but also 6.0.3.
>
> Does anyone have an idea what is going on here? What do I need to change
> to avoid installing pax-http-jetty and pax-jetty? Why are actually both
> versions installed (6.0.4 and 6.0.4)? I could not find any installed
> feature that depends on pax-http-tomcat in version 6.0.3.
>
> Best regards
> Stephan
>
>


-- 

Guillaume Nodet


Re: Karaf-4346 JIRA

2017-06-01 Thread Guillaume Nodet
First, it has to be xx.config not xx.cfg.  Before 4.2, both are handled
differently.
So in 4.1, you need to use the config syntax:
  myInts = i[ "10", "5", "4" ]

See the syntax description:

https://github.com/apache/felix/blob/trunk/configadmin/src/main/java/org/apache/felix/cm/file/ConfigurationHandler.java#L53-L61
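For illustration, a hypothetical etc/org.example.demo.config file using that typed syntax might look as follows (the PID and property names are invented):

```properties
# Typed configuration file -- note the .config extension (not .cfg, before 4.2).
# In the Felix ConfigurationHandler syntax, a lowercase type code denotes a
# primitive, an uppercase one the corresponding wrapper type.

# an int[] array
myInts = i[ "10", "5", "4" ]
# a java.lang.Integer
timeout = I"30"
# a java.lang.Boolean
enabled = B"true"
```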

2017-06-01 22:38 GMT+02:00 Leschke, Scott :

> The status of this is listed as fixed as of 4.1 but I can’t seem to get it
> to work.  I’m trying to set an int[] neither of the following worked. I get
> a NumberFormatException complaining about input string “{15,10,5}” or
> alternatively “[15,10,5]”;
>
>
>
> myInts = {10,5,0};
>
> myInts = [10,5,0];
>
>
>
> There’s discussion in the JIRA about the filter needing to be updated to
> support .cfg files but it’s looks like that change was made. So if this was
> in fact fixed, what’s the correct syntax?  I would think it would be the
> first. It works for defining the default in the config. def interface.
>
>
>
> Scott
>
>
>
>
>



-- 

Guillaume Nodet


Re: Karaf-4346 JIRA

2017-06-01 Thread Guillaume Nodet
The difference between 4.1 and 4.2 is that with 4.2, both cfg and config
files can hold typed or untyped configurations and karaf will switch to a
typed configuration automatically if needed.  The syntax will be the same
however.  And there's currently no ETA defined.

2017-06-01 23:21 GMT+02:00 Leschke, Scott :

> *From your answer I gather this will be fully sussed out in 4.2, i.e. .cfg
> will be supported and the syntax may change somewhat. Is there an ETA on
> that release?*
>
>
>
> *From:* Guillaume Nodet [mailto:gno...@apache.org]
> *Sent:* Thursday, June 01, 2017 4:11 PM
> *To:* user 
> *Subject:* Re: Karaf-4346 JIRA
>
>
>
> First, it has to be xx.config not xx.cfg.  Before 4.2, both are handled
> differently.
>
> So in 4.1, you need to use the config syntax:
>
>   myInts = i[ "10", "5", "4" ]
>
>
>
> See the syntax description:
>
>   https://github.com/apache/felix/blob/trunk/configadmin/src/main/java/org/apache/felix/cm/file/ConfigurationHandler.java#L53-L61
>
>
>
> 2017-06-01 22:38 GMT+02:00 Leschke, Scott :
>
> The status of this is listed as fixed as of 4.1 but I can’t seem to get it
> to work.  I’m trying to set an int[] neither of the following worked. I get
> a NumberFormatException complaining about input string “{15,10,5}” or
> alternatively “[15,10,5]”;
>
>
>
> myInts = {10,5,0};
>
> myInts = [10,5,0];
>
>
>
> There’s discussion in the JIRA about the filter needing to be updated to
> support .cfg files but it’s looks like that change was made. So if this was
> in fact fixed, what’s the correct syntax?  I would think it would be the
> first. It works for defining the default in the config. def interface.
>
>
>
> Scott
>
>
>
>
>
>
>
>
>
> --
>
> 
> Guillaume Nodet
>
>
>



-- 

Guillaume Nodet


Re: Unable to deploy jna 4.3.0 in Karaf 4.1.1 or 4.0.9

2017-06-02 Thread Guillaume Nodet
JNA is an optional dependency of JLine (used by the shell), so if you
install JNA, it will refresh JLine, and hence the Karaf shell.
However, the shell should come back immediately in a usable state.
You can reproduce using:
> install mvn:net.java.dev.jna/jna/RELEASE
> refresh org.jline

On 4.1.2-SNAPSHOT, the shell comes back correctly.

2017-06-02 7:10 GMT+02:00 Shyalika Benthotage <
shyalika.benthot...@medialinksaustralia.com.au>:

> Hi,
>
> We have been using 3.0.3 for some time now and trying to migrate to karaf
> 4.x. As a part of our application, when I try to deploy jna 4.3.0 or 4.4.0
> it sometimes just stay in the 'starting' state or undeploys everything
> including the shell, causing it to exit.
>
> I have attached the log here.
>
> Can anyone shed some light on what's happening here?
>
> Thanks,
>
> Shyalika
>



-- 

Guillaume Nodet


Re: Unable to deploy jna 4.3.0 in Karaf 4.1.1 or 4.0.9

2017-06-06 Thread Guillaume Nodet
Jun 06, 2017 4:19:37 PM org.apache.karaf.main.lock.SimpleFileLock lock
> INFO: Trying to lock /Users/shyalika/Downloads/apache-karaf-4.1.1/lock
> Jun 06, 2017 4:19:37 PM org.apache.karaf.main.lock.SimpleFileLock lock
> INFO: Lock acquired
>
>
> On Mon, Jun 5, 2017 at 4:22 PM, Shyalika Benthotage  medialinksaustralia.com.au> wrote:
>
>> Please note that we cannot work with a snapshot. We are trying to use
>> 4.1.1 or any 4.x stable version for our production. I have attached a
>> sample feature file that I am trying to deploy to karaf 4.1.1 and the karaf
>> log when it failed. Also please see the karaf console behaviour below. This
>> feature file might work at a random occasion, but you never know when it
>> would fail. I have been trying the whole day to come up with a pattern, but
>> it is completely random. Please let me know what I am doing wrong here.
>>
>>
>> I am on OSX Captain 10.11.6
>>
>> sh-3.2# bin/karaf clean
>>
>> __ __  
>>
>>/ //_/ __ _/ __/
>>
>>   / ,<  / __ `/ ___/ __ `/ /_
>>
>>  / /| |/ /_/ / /  / /_/ / __/
>>
>> /_/ |_|\__,_/_/   \__,_/_/
>>
>>
>> *  Apache Karaf* (4.1.1)
>>
>>
>> Hit '<tab>' for a list of available commands
>>
>> and '[cmd] --help' for help on a specific command.
>>
>> Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown
>> Karaf.
>>
>>
>> *karaf*@root()>
>>
>>
>> 16:09:33
>>
>> java.lang.IllegalStateException: No inital startlevel yet
>>
>> at org.apache.felix.framework.FrameworkStartLevelImpl.setStartL
>> evel(FrameworkStartLevelImpl.java:131)
>>
>> at org.apache.karaf.main.Main.setStartLevel(Main.java:605)
>>
>> at org.apache.karaf.main.Main$KarafLockCallback.lockAquired(Mai
>> n.java:711)
>>
>> at org.apache.karaf.main.Main.doMonitor(Main.java:382)
>>
>> at org.apache.karaf.main.Main.access$100(Main.java:75)
>>
>> at org.apache.karaf.main.Main$3.run(Main.java:369)
>>
>>
>>
>>
>> On Fri, Jun 2, 2017 at 6:30 PM, Guillaume Nodet 
>> wrote:
>>
>>> JNA is an optional dependency on JLine (used by the shell), so if you
>>> install JNA, it will refresh jline, hence the karaf shell.
>>> However, the shell should come back immediately in a usable state.
>>> You can reproduce using:
>>> > install mvn:net.java.dev.jna/jna/RELEASE
>>> > refresh org.jline
>>>
>>> On 4.1.2-SNAPSHOT, the shell comes back correctly.
>>>
>>> 2017-06-02 7:10 GMT+02:00 Shyalika Benthotage <
>>> shyalika.benthot...@medialinksaustralia.com.au>:
>>>
>>>> Hi,
>>>>
>>>> We have been using 3.0.3 for some time now and trying to migrate to
>>>> karaf 4.x. As a part of our application, when I try to deploy jna 4.3.0 or
>>>> 4.4.0 it sometimes just stay in the 'starting' state or undeploys
>>>> everything including the shell, causing it to exit.
>>>>
>>>> I have attached the log here.
>>>>
>>>> Can anyone shed some light on what's happening here?
>>>>
>>>> Thanks,
>>>>
>>>> Shyalika
>>>>
>>>
>>>
>>>
>>> --
>>> 
>>> Guillaume Nodet
>>>
>>>
>>
>


-- 

Guillaume Nodet


Re: new shell behavior regarding the threads pooling with Karaf 4.1

2017-06-08 Thread Guillaume Nodet
It has changed due to support for redirections and jobs in the console.
However, the assumption was already wrong with 4.0.x, for example:

*karaf*@root()> thread-name = { ((($.context bundle) loadClass
java.lang.Thread) currentThread) name }

((($.context bundle) loadClass java.lang.Thread) currentThread) name

*karaf*@root()> echo $(thread-name)

Karaf local console user karaf

*karaf*@root()> echo foo | echo $(thread-name)

pipe-[echo, $(thread-name)]

As you can see, in the second case, the thread is not the main one.
In Karaf 4.1.x, you can launch jobs in the background using the & operator,
or move them to the background at a later time using ^Z followed by bg (and
bring them back with fg).  So jobs have to be launched in a separate thread.

On the security side, the way to obtain the karaf user is to use the jaas
subject.  It should be propagated throughout the session (though I just
found KARAF-5183).  I think if you're using a InheritableThreadLocal, it
should almost work (though you'll have the same problems as described in
KARAF-5183 I suppose).

2017-06-07 16:45 GMT+02:00 Nicolas Brasey :

> Hi guys!
>
> We are currently migrating from karaf 4.0.8 to 4.1.1 and discovered a new
> behavior in the shell. The new karaf does not dedicate a thread to the
> shell session, that means different commands in the same shell session
> might run in a different thread, which break our authentication mechanism
> which is maintained in a threadlocal.
>
> In the past, we never experienced karaf using different threads for
> executing different commands in the same shell session.
>
> So the question is: Is this change intentional from the Karaf team, or our
> assumption to have a single threaded session is wrong ?
>
> Thanks a lot!
>
> Nicolas
>



-- 

Guillaume Nodet


Re: Feature install runs 30 minutes finishes wiith out of memory error in Felix resolver code

2017-06-08 Thread Guillaume Nodet
What about just the following command in the karaf console ?
  update org.apache.karaf.features.core
mvn:org.apache.karaf.features/org.apache.karaf.features.core/4.1.1

2017-06-08 19:59 GMT+02:00 jholloway :

> We are currently using Karaf 4.0.9.  We have a feature file that has worked
> for us for over a year.  We have noticed a slight degradation during the
> feature install in the past few months as we have added bundles and
> sub-features to our feature file.  Recently, we added numerous new
> sub-features and bundles to our one feature file.  Currently, the feature
> install hangs between 15 and 45 minutes until we get a Java heap space out
> of memory error.  The print stack trace shows that it is in the Felix
> resolver the entire time.  However, when we downloaded, setup and
> configured
> Karaf 4.1.1, the same feature file installed with no problem and very
> quickly.  Is there something we can do within Karaf 4.0.9 to make it
> process
> our feature file more efficiently without hanging?  Officially upgrading to
> Karaf 4.1.1 is out of our control.
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.com/Feature-install-runs-30-minutes-finishes-wiith-out-of-memory-error-in-Felix-resolver-code-tp4050625.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: Understanding features and dependencies

2017-06-13 Thread Guillaume Nodet
2017-06-13 10:35 GMT+02:00 Stephen Kitt :

> Hi,
>
> As we continue to try to migrate OpenDaylight to Karaf 4, it’s become
> very clear that we need to understand exactly what feature dependencies
> entail. (And yes, I’ll submit documentation patches once we’ve done
> so.)
>
> OpenDaylight (ODL) defines lots and lots of features, some of which
> contain bundles which survive refreshes, many of which don’t. In
> real-world deployments, the Karaf distributions which are built tend to
> use boot features to start the features they’re interested in. We make
> extensive use of feature dependencies of course, e.g.
>
> 
> odl-ttp-model
> odl-mdsal-apidocs
> odl-restconf
> odl-mdsal-broker
> 
>
> In Karaf 3 all this works fine.
>
> In Karaf 4 we’re having lots of issues related to bundle refreshes, so
> we’ve been trying to figure out how to avoid them. Of course,
> “feature:install --no-auto-refresh” works, but that’s not ideal (at
> least, I don’t think it is). We thought perhaps using the dependency or
> prerequisite attributes might help.
>
> As we understand things (and this is the main point I would like to
> clarify), in Karaf 4, based on the feature declaration above:
> * (re)starting odl-ttp-model-rest would restart odl-ttp-model,
>   odl-mdsal-apidocs etc. (which is a problem because e.g.
>   odl-mdsal-broker is very likely to already be running when we install
>   another feature)
> * adding “dependency="true"” would disable this
> * adding “prerequisite="true"” would also disable this, and ensure that
>   the corresponding feature is started before odl-ttp-model-rest
>

That's wrong.  There's no relationship between how a feature is defined and
bundles being refreshed when the feature is installed.
A bundle will be refreshed if:
  * it has been updated
  * it has been uninstalled
  * one of its dependencies is refreshed
  * a new optional dependency has been installed

When installing a feature, you can use the --verbose flag, which will print
why bundles are being refreshed.
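The rules above make refresh a transitive closure over wiring dependencies; here is a toy sketch of that closure (bundle names are made up, and this ignores the optional-dependency case):

```python
# deps maps each bundle to the bundles it is wired to (its dependencies)
deps = {
    "app": {"lib"},
    "lib": {"api"},
    "api": set(),
    "standalone": set(),
}

def refresh_set(triggers, deps):
    """Bundles needing a refresh: the triggers (updated/uninstalled
    bundles) plus anything depending, directly or not, on one of them."""
    to_refresh = set(triggers)
    changed = True
    while changed:
        changed = False
        for bundle, wired in deps.items():
            if bundle not in to_refresh and wired & to_refresh:
                to_refresh.add(bundle)
                changed = True
    return to_refresh

# updating "api" drags in "lib" and "app", but not "standalone"
print(sorted(refresh_set({"api"}, deps)))  # ['api', 'app', 'lib']
```

This is why an update low in the dependency graph can ripple surprisingly far, and why --verbose output is the right tool to trace the ripple.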


>
> We hoped the second option would solve our problems, but in our
> experimentation it seems that declaring features in this way means they
> aren’t necessarily installed. (I’m hoping I can still dig up examples.)
> As in,
>
> 
> 
> 
>
> results in a distribution where
>
> feature:install a
>
> doesn’t necessarily install b. Is that supposed to be the case?
>
> We also thought that declaring prerequisites would work too, especially
> since our feature dependencies are acyclic and so it is actually
> possible to initialise all our features in a given order. But when we
> tried this we ran into stack overflows. (I’ll do it again and file
> bugs.)
>
> Does this make any sense at all?
>
> Regards,
>
> Stephen
>



-- 

Guillaume Nodet


Re: Karaf Feature vs. OBR

2017-06-14 Thread Guillaume Nodet
So if you consider an OBR as being a collection of resources, each resource
having capabilities and requirements, then a feature repository is an OBR
repository; it's just that the syntax is more concise.
If you want to look at what the repository looks like, you can launch the
following command in karaf:
  > feature:install --store resolution.json --verbose --simulate scr

Then, look at the resolution.json file, it will contain the OBR repository
used by the resolver in a json format.  The xml syntax would be slightly
different of course, and a bit more verbose too, but roughly the same data.
I do think the features syntax is a bit more understandable.

But you do not want to compare OBR and features.  I haven't seen any OBR
repository used which would contain other things than just OSGi bundles.
A feature is more a deployment artifact than an OSGi bundle, so it's better
compared with OSGi subsystems.

With pure OBR, you can't group bundles together, and you usually don't want
to edit such a repository file manually, so in the end you can never really
hack the content.  It has to be generated, and is mostly generated only
from a set of OSGi bundles.  You can't capture all the constraints by using
bundles only.
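For contrast, a small (hypothetical) feature descriptor shows the grouping a plain OBR index cannot express: bundles, configuration and a dependency on another feature, all deployed as one unit. All names, versions and coordinates below are illustrative:

```xml
<features name="example-repo" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
  <feature name="my-app" version="1.0.0">
    <!-- pull in another feature, not just a package/capability -->
    <feature>http</feature>
    <!-- configuration shipped alongside the bundles -->
    <config name="org.example.my-app">
      poolSize = 10
    </config>
    <bundle>mvn:org.example/my-app-api/1.0.0</bundle>
    <bundle>mvn:org.example/my-app-impl/1.0.0</bundle>
  </feature>
</features>
```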

2017-06-14 7:49 GMT+02:00 David Leangen :

>
> Hi!
>
> I am trying to wrap my head around the differences between an OBR and a
> Karaf Feature. The concepts seem to be overlapping.
>
> An OBR has an index of the contained bundles, as well as meta information,
> which includes requirements and capabilities. An OBR is therefore very
> useful for resolving bundles, and partitioning bundles into some kind of
> category. It can also be versioned, and can contain different versions of
> bundles. An OBR could potentially be used to keep snapshots of system
> releases. I believe that this is somewhat how Apache ACE works. (A
> Distribution can be rolled back by simply referring to a different OBR and
> allowing the system to re-resolve.) The actual bundles need to be stored
> somewhere. The OBR index needs to provide links to that storage.
>
> A Karaf Feature is basically an index of bundles (and configurations),
> too. I think that it can also be versioned, and can contain different
> versions of bundles. Like an OBR, it is very useful for partitioning
> bundles into some kind of category, so the groups of bundles can be
> manipulated as a single unit. Just like an OBR, the Karaf Feature also
> needs to provide a link to the bundles. AFAIU, resolution is done somehow
> in Karaf, based on the bundles available via the Features, so in the end
> the entire mechanism seems almost identical to what the OBR is doing.
>
>
> So many similarities!
>
>
> I understand that a Feature can include configurations, which is nice, but
> why have a competing non-official standard against an official standard? If
> configurations is the only problem, then why not build it on top of OBRs,
> rather than creating something completely new and different and competing?
>
> Is it to try to force lock-in to Karaf? Or am I completely missing
> something?
>
>
> Thanks for explaining! :-)
>
>
> Cheers,
> =David
>
>
>


-- 

Guillaume Nodet


Re: Karaf Feature vs. OBR

2017-06-14 Thread Guillaume Nodet
Again, I'm not sure why you see features competing with OBR.
We do actually leverage OBR internally, and we can also leverage it
externally, though that's not much advertised; it was hinted at by
Jean-Baptiste when he talked about Cave.

OBR is the repository specification, so it defines a Repository interface.
We do have multiple implementations of it in Karaf : the standardized XML
one, a JSON based repository implementation and an in-vm one.

A feature descriptor supports the <resource-repository> element. The
content of this element is a URL to an OBR repository (optionally prefixed
with json: or xml:).  All features defined in the features repository will
behave as if they had the resources defined in the OBR repository declared
with <resource-repository>xxx</resource-repository>.

You can also provide a list of global repositories and configure it in
etc/org.apache.karaf.features.cfg with the resourceRepositories key (a
comma-separated list of URLs).
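For example (URLs illustrative), the key would be set in etc/org.apache.karaf.features.cfg like this:

```properties
# comma-separated list of OBR repository URLs, optionally prefixed with json: or xml:
resourceRepositories = xml:file:${karaf.etc}/obr/repository.xml, json:http://repo.example.org/index.json
```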

Also, there's absolutely no value in the OBR bundle description compared to
a manifest.  It contains the same information in a different form and is
usually generated from the manifest.  Fwiw, when a feature has a reference
to a bundle, we do generate the OSGi Resource from the manifest directly
without using the OBR xml  description, but it's the same.

I'm really not sure what we could do to leverage OBR more...


2017-06-14 23:58 GMT+02:00 David Leangen :

>
> Hi Guillaume,
>
> Thank you for this assessment.
>
> I agree that Features adds value. Your post explains a lot of good reasons
> why this is so.
>
> My question is more about “why compete with OBR?”. Instead of embracing
> OBR and working on top of it, it seems that Features want to replace it.
> This is causing me to have to make a lot of choices in my deployment
> mechanism.
>
> Features could be really helpful for deployment by managing OBRs,
> configurations, and other deployment information. They could also manage
> versioning better etc. Maybe something like what Apache ACE was trying to
> do. However, instead of “adding” value, currently Features are completely
> replacing OBR, which I find interesting. But I understand that there is
> some legacy to this. Now that it works, it would take some momentum to move
> to a more standards-based approach.
>
>
> My current issue is: how can I use Features for Continuous Deployment? I
> am having trouble with automation. That is what got me interested in the
> idea behind the Features…
>
>
> Cheers,
> =David
>
>
>
> On Jun 15, 2017, at 6:38 AM, Guillaume Nodet  wrote:
>
> So if you consider an OBR as being a collection of resources, each
> resource having capabilities and requirements, then a feature repository is
> an OBR repository; it's just that the syntax is more concise.
> If you want to look at what the repository looks like, you can launch the
> following command in karaf:
>   > feature:install --store resolution.json --verbose --simulate scr
>
> Then, look at the resolution.json file, it will contain the OBR repository
> used by the resolver in a json format.  The xml syntax would be slightly
> different of course, and a bit more verbose too, but roughly the same data.
> I do think the features syntax is a bit more understandable.
>
> But you do not want to compare OBR and features.  I haven't seen any OBR
> repository used which would contain other things than just OSGi bundles.
> A feature is more a deployment artifact than an OSGi bundle, so it's
> better compared with OSGi subsystems.
>
> With pure OBR, you can't group bundles together, you usually don't want to
> edit such a repository file manually, so in the end you can never really
> hack the content.  It has to be generated, and is mostly generated only
> from a set of OSGi bundles.  You can't capture all the constraints by using
> bundles only.
>
> 2017-06-14 7:49 GMT+02:00 David Leangen :
>
>>
>> Hi!
>>
>> I am trying to wrap my head around the differences between an OBR and a
>> Karaf Feature. The concepts seem to be overlapping.
>>
>> An OBR has an index of the contained bundles, as well as meta
>> information, which includes requirements and capabilities. An OBR is
>> therefore very useful for resolving bundles, and partitioning bundles into
>> some kind of category. It can also be versioned, and can contain
>> different versions of bundles. An OBR could potentially be used to keep
>> snapshots of system releases. I believe that this is somewhat how Apache
>> ACE works. (A Distribution can be rolled back by simply referring to a
>> different OBR and allowing the system to re-resolve.) The actual bundles
>> need to be stored somewhere. The OBR index needs to provide links to that
>> storage.
>>
>> A Ka

Re: Complex config

2017-06-14 Thread Guillaume Nodet
FileInstall is pluggable and can support additional file formats.
For this, you need to register an
org.apache.felix.fileinstall.ArtifactUrlTransformer
or org.apache.felix.fileinstall.ArtifactInstaller class in the OSGi
registry.  We leverage that in Karaf to provide support for blueprint
and spring xml files, on-the-fly jar -> bundle conversion, kar files and
feature repositories.

This does not directly plug into the ConfigAdmin, so you'd have to do that
yourself.


2017-06-14 22:49 GMT+02:00 dynamodan :

> I found this thread because of a similar need -- I store a configuration in
> yaml format for a java application that I'm "OSGi-fying".  The flat
> dictionary isn't going to work for me, nor is the
> `${karaf.etc}/worker-config.yaml` method going to work (it won't monitor
> the
> worker-config.yaml for updates!).
>
> +jbonofre, you mentioned something about an adapter:
>
>
> jbonofre wrote
> > What do you mean exactly ? You want to load the yaml configuration in
> > ConfigAdmin ? In that case, it's possible in an adapter.
>
> That would be cool, but if it is even simpler, let's be format-agnostic,
> and
> just implement and register a ManagedService.updated() function that gets
> run any time the filesystem detects a change (along the lines of
> java.nio.file.WatchService).  Send a null java.util.Dictionary object and
> let the function sort out what changed, if anything.
>
> Thanks in advance!
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.
> com/Complex-config-tp4043584p4050746.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: Karaf Feature vs. OBR

2017-06-15 Thread Guillaume Nodet
2017-06-15 9:00 GMT+02:00 David Leangen :

>
> Thanks Guillaume. A lot of good food for thought.
>
> Again, I'm not sure why you see features competing with OBR.
>
>
> Coming from bnd/EnRoute, in my build environment I can create different
> OBRs, and “release" to them. I can use a different OBR per workspace, which
> means that I can develop each “feature" separately, and release it to its
> own OBR. Thus, an OBR defines a “feature”.
>

No, it defines a repository, and that's fine.  A feature is more than a
list of bundles.
OBR is useful when you want to install individual bundles.
For example, if you want to deploy a web application, you could write a
feature which will have a dependency on the karaf "war" feature.  When you
install your feature, it will ensure the web app support is deployed
correctly.
With OBR, you'll have a dependency on the servlet api, so the servlet api
bundle will be deployed, and that's all.
Of course, if you're only interested in the simple use cases, you can use
an OBR repo and it will work.

If you want to experiment deployment based on an OBR repo + a requirement,
you could use the following file and drop it in the deploy folder:

<features name="[name]" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
  <resource-repository>[url to the OBR repo]</resource-repository>
  <feature name="[feature name]">
    <requirement>[the requirement]</requirement>
  </feature>
</features>

If that's really all you need, it should be very easy to write a small
karaf command that would do the same with the 3 parameters: a name, an OBR
repo url and a requirement.



>
> What I would like to be able to do is simply push the OBR into my
> container, without having any extra layer. When I tried in the past, there
> were some bugs and it did not work out very successfully. Maybe things have
> changed since then…
>
> Using Apache ACE as an example, an OBR can be used in a way that is very
> similar to a Feature. It works extremely well in the EnRoute environment,
> and cuts down a lot of noise, IMO.
>

I haven't used ACE in a long time.  It has very strong limitations
because it's based on DeploymentAdmin, which does not support deploying a
bundle with multiple versions.  This is a show-stopper for me.


>
> A feature descriptor supports the <resource-repository> element. The
> content of this element is a URL to an OBR repository (optionally prefixed
> with json: or xml:).  All features defined in the features repository will
> behave as if they had the resources defined in the OBR repository declared
> with <resource-repository>xxx</resource-repository>.
>
>
> Ok, either I did not know that, or I forgot about that. I’ll take a look.
> IIRC I think this is what didn’t work for me when I tried some time ago.
>

I'd be happy to fix any bug.


>
> You can also provide a list of global repositories and configure it in
> etc/org.apache.karaf.features.cfg with the resourceRepositories key (a
> comma-separated list of URLs).
>
>
> The problem with this approach is, unless something has changed, I have to
> restart my container each time there is a change. I would also have to
> figure out a way to push the changes to Karaf. Perhaps this is easier than
> I thought, but I did not find a good way last time I looked into this.
>

Yes, that's limited support.  The reason is that I see problems in using
this, with not much value.  The first option is preferable in Karaf imho.


>
> Also, there's absolutely no value in the OBR bundle description compared
> to a manifest.  It contains the same information in a different form and is
> usually generated from the manifest.  Fwiw, when a feature has a reference
> to a bundle, we do generate the OSGi Resource from the manifest directly
> without using the OBR xml  description, but it's the same.
>
>
> True, but then again my understanding is that a properly curated OBR
> should provide a set of bundles, and should not change (which is why the ID
> gets updated each time there is even a minor change). The information could
> very well come from the bundles, but if the bundles don’t change and the
> index is trusted, then the pre-parsed manifest info is already in the
> index, so it’s a duplicate effort to redo the parsing. No?
>

Actually, I think it's faster to parse a manifest than the OBR xml,
given the verbosity of the xml. Plus, it's an additional file to manage, so
it has to provide some value, else it's not worth the pain.


>
>
> Perhaps the Karaf/Maven way of thinking is very different from the bnd
> way? Or maybe there has been convergence over the past few years, but the
> tooling has not kept up? (That is what I am trying to figure out, since I
> don’t know Maven very deeply, and based on what I understand, I think I
> prefer the bnd way.)
>

Yeah, karaf tooling is definitely lacking I think.


>
>
> Cheers,
> =David
>
>
>
> On Jun 15, 2017,

Re: Automatic Upgrade Installer for Karaf

2017-06-21 Thread Guillaume Nodet
2017-06-21 18:17 GMT+02:00 Hari Madhavan :

> Hi ,
>
> I have developed an automatic upgrade installer that would always pull
> down the latest features from a nexus repository as a night-time scheduled
> action. I had initially scripted it in karaf, but then implemented it in
> Java using karaf services.
>
> The application is built as a set of features ( 18 to be exact ) and I
> have  a summary feature ( my-app feature)  that includes all the other
> features , except the upgrade-manager feature which has a self-upgrading as
> well as my-app upgrading capability
>
> A few items are making it difficult to get it to work exactly as I want
> - and I wanted to know what could be wrong in my assumption/approach
>
>1. While most of the features and bundles can be resolved based on
>karaf's resolution schemes , there are a couple of features whose bundles
>should be started only in the end - they depend on some interface bundles
>but work with implementations of the interfaces which should be available
>when they start - however even if these features are listed at the end of
>the feature definition, all the bundles are started in alphabetical order.
>Eventually I have used bundle start-level to force these features to start
>last
>
>
You need to use service dependencies and track them using service trackers
in order to get rid of this ordering problem.
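One declarative way to express such a service dependency, so that the consumer only starts once an implementation is present, is a Blueprint reference; a minimal sketch with made-up interface and class names:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <!-- waits (with a grace period) until a Store implementation is registered -->
  <reference id="store" interface="org.example.api.Store"/>
  <bean id="consumer" class="org.example.impl.Consumer">
    <argument ref="store"/>
  </bean>
</blueprint>
```

The same effect can be had programmatically with an OSGi ServiceTracker or with Declarative Services references; the point is to react to service availability rather than to bundle start order.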


>
>1. Sometimes data migration needs to be done for features that are
>upgraded before they are started. So I would prefer to have the feature
>be installed in stopped mode and then start it after migration. However
>even though I have provided the  Option.NoAutoStartBundles enum option
>for FeatureService.installFeature, the bundles are always started on
>installation .
>
Mmh, I think this only affects newly installed features.  You could try to
stop the feature using FeaturesService.updateFeaturesState, run the
deployer and start it again, maybe.

>
>1. All the bundles are shut down before the upgrade to ensure that
>there is no other activity during the upgrade. However, the very act of
>shutting down the my-app feature causes many bundles in other features
>(including the upgrade-manager, which is completely isolated from the rest
>of my-app) to restart themselves. How can this be prevented?
>
There are 2 different things: stopping a bundle and updating a bundle.
Bundles are not stopped by the deployer unless an update or a refresh is
required.  Try with the --verbose flag to get an idea about why bundles are
refreshed.


> I am using Karaf 4.0.5 .
>
> I'd really appreciate insights/thoughts on these points.
>
> Regards
> Hari
>
> --
> Hari
> 9845147731 <(984)%20514-7731>
>



-- 

Guillaume Nodet


Re: Karaf Container 4.0.x / minimum configuration

2017-06-21 Thread Guillaume Nodet
If you don't need the dynamism of OSGi / Karaf, you could build your custom
static distribution.  It would trim down the requirements because the
feature resolution would not need to happen and because bundles can be
installed directly from the repository without being copied to the cache
folder.
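A custom distribution is typically assembled with the karaf-maven-plugin; a minimal sketch (feature names illustrative, and the exact configuration options vary by plugin version):

```xml
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <version>4.0.9</version>
  <extensions>true</extensions>
  <configuration>
    <!-- startup features are resolved at build time into startup.properties -->
    <startupFeatures>
      <feature>framework</feature>
    </startupFeatures>
    <bootFeatures>
      <feature>shell</feature>
      <feature>my-app</feature>
    </bootFeatures>
  </configuration>
</plugin>
```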

2017-06-21 18:26 GMT+02:00 Jean-Baptiste Onofré :

> Hi JL,
>
> Karaf 4.0.x requires:
> - Java 8
> - 512M RAM
> - 50MB free filesystem space
>
> What do you have installed on Karaf (additional feature) ? On which OS ?
>
> Regards
> JB
>
>
> On 06/21/2017 04:55 PM, jljordan wrote:
>
>> Hi,
>> I am looking for the minimum recommended configuration (HW requirements:
>> Flash and RAM)
>> for Karaf Container 4.0.x .
>> In the Karaf User's Guide, there is that information:
>> 50 MB of free disk space for the Apache Karaf binary distribution.
>> Are there other recommendations?
>> I ask that question because we use Karaf Container 4.0.x and JDK 1.8,
>> and we noticed some slowdown in program execution and very high CPU
>> occupancy.
>> If you have some clues to check that the configuration is correctly set,
>> I would be interested.
>>
>> Thanks in advance for your help,
>> JL Jordan
>>
>>
>>
>> --
>> View this message in context: http://karaf.922171.n3.nabble.
>> com/Karaf-Container-4-0-x-minimum-configuration-tp4050843.html
>> Sent from the Karaf - User mailing list archive at Nabble.com.
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



-- 

Guillaume Nodet


Re: Problems with karaf shell and 4.1.2-SNAPSHOT

2017-06-22 Thread Guillaume Nodet
Please raise a JIRA, I'll have a look.

2017-06-22 10:21 GMT+02:00 Patrick Magon :

> Hello,
>
> With 4.1.2-SNAPSHOT, I have the following (and annoying) behavior:
> when I quit the karaf shell  (using ctrl+C or ctrl+D), I have to use the
> "reset" command to fix my terminal (messed up).
>
> This happens only when using "bin/start" and "bin/client" (works with
> "bin/karaf")
> I'm on linux (same behavior with xterm and "terminator")
>
> The problem does not occur with 4.1.0.
>
> Any clue ?
> Thanks
>
> Patrick
>
>


-- 

Guillaume Nodet


Re: Problems with karaf shell and 4.1.2-SNAPSHOT

2017-06-23 Thread Guillaume Nodet
This is unrelated.
If you can reproduce the issue, please raise a JIRA and explain the steps
needed to reproduce it.

2017-06-23 12:24 GMT+02:00 Nicolas Brasey :

> Hi,
>
> I've also a similar issue when using feature:install or feature:uninstall,
> it regularly exits the karaf shell. The command is properly executed but
> the shell exits.
>
> This is also happening on 4.1.2-SNAPSHOT
>
> Cheers,
> Nicolas
>
>
>
>
>
> On Fri, Jun 23, 2017 at 11:18 AM, Jean-Baptiste Onofré 
> wrote:
>
>> Hi Patrick,
>>
>> it sounds like a bug to me. Let me try to reproduce and fix it.
>>
>> Thanks,
>> Regards
>> JB
>>
>> On 06/23/2017 10:47 AM, Patrick Magon wrote:
>>
>>> Hi,
>>>
>>> I tested the fix KARAF-5216 <https://issues.apache.org/jir
>>> a/browse/KARAF-5216> and it works... thanks.
>>>
>>> I have another question about karaf shell (bin/client): with previous
>>> versions (before 4.1.0)  a CTRL + C did NOT exit the terminal.
>>> Since 4.1.0, a CTRL + C exits the karaf shell (ONLY in bin/client).
>>> If I connect with ssh or start using "bin/karaf", a CTRL+C will not
>>> exit.
>>>
>>> This is a little annoying: I'm used to using CTRL+C when I mistype
>>> something.
>>>
>>> Is this a bug or by design ?
>>>
>>> Thanks
>>>
>>> Patrick
>>>
>>> On Thu, Jun 22, 2017 at 12:13 PM, Patrick Magon >> <mailto:pma...@gmail.com>> wrote:
>>>
>>> I created KARAF-5216 <https://issues.apache.org/jir
>>> a/browse/KARAF-5216>
>>>
>>> Thanks
>>>
>>> On Thu, Jun 22, 2017 at 11:51 AM, Guillaume Nodet >> <mailto:gno...@apache.org>> wrote:
>>>
>>> Please raise a JIRA, I'll have a look.
>>>
>>> 2017-06-22 10:21 GMT+02:00 Patrick Magon >> <mailto:pma...@gmail.com>>:
>>>
>>> Hello,
>>>
>>> With 4.1.2-SNAPSHOT, I have the following (and annoying)
>>> behavior:
>>> when I quit the karaf shell  (using ctrl+C or ctrl+D), I
>>> have to use the
>>> "reset" command to fix my terminal (messed up).
>>>
>>> This happens only when using "bin/start" and "bin/client"
>>> (works
>>> with "bin/karaf")
>>> I'm on linux (same behavior with xterm and "terminator")
>>>
>>> The problem does not occur with 4.1.0.
>>>
>>> Any clue ?
>>> Thanks
>>>
>>> Patrick
>>>
>>>
>>>
>>>
>>> -- 
>>> Guillaume Nodet
>>>
>>>
>>>
>>>
>> --
>> Jean-Baptiste Onofré
>> jbono...@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>
>


-- 

Guillaume Nodet


Re: Problems with karaf shell and 4.1.2-SNAPSHOT

2017-06-23 Thread Guillaume Nodet
Could you raise a JIRA please ?
This is clearly a regression and not the desired behavior.

2017-06-23 10:47 GMT+02:00 Patrick Magon :

> Hi,
>
> I tested the fix  KARAF-5216
> <https://issues.apache.org/jira/browse/KARAF-5216> and it works... thanks.
>
> I have another question about karaf shell (bin/client): with previous
> versions (before 4.1.0)  a CTRL + C did NOT exit the terminal.
> Since 4.1.0, a CTRL + C exits the karaf shell (ONLY in bin/client).
> If I connect with ssh or start using "bin/karaf", a CTRL+C will not
> exit.
>
> This is a little annoying: I'm used to using CTRL+C when I mistype
> something.
>
> Is this a bug or by design ?
>
> Thanks
>
> Patrick
>
> On Thu, Jun 22, 2017 at 12:13 PM, Patrick Magon  wrote:
>
>> I created KARAF-5216 <https://issues.apache.org/jira/browse/KARAF-5216>
>>
>> Thanks
>>
>> On Thu, Jun 22, 2017 at 11:51 AM, Guillaume Nodet 
>> wrote:
>>
>>> Please raise a JIRA, I'll have a look.
>>>
>>> 2017-06-22 10:21 GMT+02:00 Patrick Magon :
>>>
>>>> Hello,
>>>>
>>>> With 4.1.2-SNAPSHOT, I have the following (and annoying) behavior:
>>>> when I quit the karaf shell  (using ctrl+C or ctrl+D), I have to use the
>>>> "reset" command to fix my terminal (messed up).
>>>>
>>>> This happens only when using "bin/start" and "bin/client" (works with
>>>> "bin/karaf")
>>>> I'm on linux (same behavior with xterm and "terminator")
>>>>
>>>> The problem does not occur with 4.1.0.
>>>>
>>>> Any clue ?
>>>> Thanks
>>>>
>>>> Patrick
>>>>
>>>>
>>>
>>>
>>> --
>>> 
>>> Guillaume Nodet
>>>
>>>
>>
>


-- 

Guillaume Nodet


Re: Conditional Features and capabilities

2017-06-29 Thread Guillaume Nodet
Actually, requirements are supported in conditionals.
You need to prefix the requirement with "req:"; see the following
example:

https://github.com/apache/karaf/blob/master/assemblies/features/standard/src/main/feature/feature.xml#L267
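A conditional keyed on a requirement rather than a feature would then look something like the sketch below (feature names, coordinates and the osgi.service filter are illustrative assumptions, not taken from the thread):

```xml
<feature name="web-user-interface" version="1.0.0">
  <bundle>mvn:org.example/web-ui/1.0.0</bundle>
  <conditional>
    <!-- extra bundles installed only when the capability is present -->
    <condition>req:osgi.service;filter:="(objectClass=org.example.Database)"</condition>
    <bundle>mvn:org.example/web-ui-database/1.0.0</bundle>
  </conditional>
</feature>
```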

Guillaume

2017-06-29 14:55 GMT+02:00 jeremie.bre...@gmail.com <
jeremie.bre...@gmail.com>:

> Hi,
>
> I have two features implementing the same capability (and I can only
> install one of both - For example a capability "database" and two features,
> "derby" and "H2").
>
> I have also a third feature, "web-user-interface", and I want that, when
> this feature and the capability defined earler are installed, then another
> set of bundles are automatically installed.
>
> The "conditional" in the features.xml seems to allow this kind of
> behavior, but it looks like I can only specified a feature as a dependency,
> and not a capability.
>
> How can I implement this scenario with Karaf 4.1?
>
> Regards,Jérémie
>



-- 

Guillaume Nodet


Re: Pax-JDBC 1.1 hikari pool and XA Support

2017-07-10 Thread Guillaume Nodet
I don't recommend using a non-XA-specific pooling mechanism for JDBC support,
unless you don't care about the ability to recover in-flight transactions
(which doesn't play well with wanting XA).
If you're using the geronimo/aries transaction manager, you may want to use
the pax-jdbc-pool-aries instead which will support recovery.

Guillaume

2017-07-10 17:03 GMT+02:00 sahlex :

> Hello.
>
> When using pax-jdbc 1.1.0 (on karaf 4.0.8) to be able to use hikari pool
> support everything is working fine until I switch on XA support.
> In this case the DataSource is not created! When I comment out the XA
> support from the datasource factory config file, the DataSource pops up
> again.
>
> I have transaction service 2.1.0 deployed (tried with 1.1.1 as well):
> list | grep Trans
> 164 | Active | 80 | 2.1.0 | Apache Aries Transaction Blueprint
> 165 | Active | 80 | 1.3.1 | Apache Aries Transaction Manager
>
> My configuration file:
> osgi.jdbc.driver.name = mariadb
> dataSourceName = whatever
> databaseName = whatever
> user = xxx
> password = xxx
> pool = hikari
> # xa = true
> hikari.maximumPoolSize = 200
> hikari.connectionTimeout = 400
> url = jdbc:mariadb:failover://1.2.3.4/bam?characterEncoding=UTF-8&
> useServerPrepStmts=true
>
> service:list DataSource
> [javax.sql.DataSource]
> --
> databaseName = whatever
> dataSourceName = whatever
> felix.fileinstall.filename = file:/opt/
> hikari.connectionTimeout = 400
> hikari.maximumPoolSize = 200
> osgi.jdbc.driver.name = mariadb
> osgi.jndi.service.name = whatever
> password = xxx
> service.bundleid = 126
> service.factoryPid = org.ops4j.datasource
> service.id = 391
> service.pid = org.ops4j.datasource.f349f611-9c2e-48a7-8ac0-3789a8f5dd66
> service.scope = singleton
> url = jdbc:mariadb:failover://172.17.42.50:3309/whatever?character
> Encoding=UTF-8&useServerPrepStmts=true
> user = xxx
> Provided by :
> OPS4J Pax JDBC Config (126)
>
> Regards, Alexander
>
>
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.
> com/Pax-JDBC-1-1-hikari-pool-and-XA-Support-tp4050977.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: Re: Pax-JDBC 1.1 hikari pool and XA Support

2017-07-11 Thread Guillaume Nodet
The hikari doc says "Note XA data sources are not supported." on
https://github.com/brettwooldridge/HikariCP ...

I think you need to set the pool.name property in the config for the
datasource to be wrapped.

2017-07-11 9:15 GMT+02:00 :

> I thought I was using Aries Transaction Manager still?
> Does it mean, that hikari pooling does not support XA?
> When I use aries pooling, it complains about "Unable to recover
> XADataSource: aries.xa.name property not set". I tried to set it in the
> pax-jdbc datasource file but it doesn't seem to make use of it.
>
> Alexander
>
>
> >>>
> Another option would be to look at Apache Aries Transaction control. That
> provides a simple, effective model for connection pooling, resource
> lifecycle management, and transaction enlistment. It will also be the
> reference implementation of the OSGi Transaction Control specification when
> OSGi R7 goes final.
>
> Tim
>
> Sent from my iPhone
>
> On 10 Jul 2017, at 16:18, Guillaume Nodet  wrote:
>
> I don't recommend using a non-XA-specific pooling mechanism for JDBC
> support, unless you don't care about the ability to recover in-flight
> transactions (which doesn't play well with wanting XA).
> If you're using the geronimo/aries transaction manager, you may want to
> use the pax-jdbc-pool-aries instead which will support recovery.
>
> Guillaume
>
> 2017-07-10 17:03 GMT+02:00 sahlex :
>
>> Hello.
>>
>> When using pax-jdbc 1.1.0 (on karaf 4.0.8) to be able to use hikari pool
>> support everything is working fine until I switch on XA support.
>> In this case the DataSource is not created! When I comment out the XA
>> support from the datasource factory config file, the DataSource pops up
>> again.
>>
>> I have transaction service 2.1.0 deployed (tried with 1.1.1 as well):
>> list | grep Trans
>> 164 | Active | 80 | 2.1.0 | Apache Aries Transaction Blueprint
>> 165 | Active | 80 | 1.3.1 | Apache Aries Transaction Manager
>>
>> My configuration file:
>> osgi.jdbc.driver.name = mariadb
>> dataSourceName = whatever
>> databaseName = whatever
>> user = xxx
>> password = xxx
>> pool = hikari
>> # xa = true
>> hikari.maximumPoolSize = 200
>> hikari.connectionTimeout = 400
>> url = jdbc:mariadb:failover://1.2.3.4/bam?characterEncoding=UTF-8&
>> useServerPrepStmts=true
>>
>> service:list DataSource
>> [javax.sql.DataSource]
>> --
>> databaseName = whatever
>> dataSourceName = whatever
>> felix.fileinstall.filename = file:/opt/
>> hikari.connectionTimeout = 400
>> hikari.maximumPoolSize = 200
>> osgi.jdbc.driver.name = mariadb
>> osgi.jndi.service.name = whatever
>> password = xxx
>> service.bundleid = 126
>> service.factoryPid = org.ops4j.datasource
>> service.id = 391
>> service.pid = org.ops4j.datasource.f349f611-9c2e-48a7-8ac0-3789a8f5dd66
>> service.scope = singleton
>> url = jdbc:mariadb:failover://172.17.42.50:3309/whatever?character
>> Encoding=UTF-8&useServerPrepStmts=true
>> user = xxx
>> Provided by :
>> OPS4J Pax JDBC Config (126)
>>
>> Regards, Alexander
>>
>>
>>
>>
>>
>> --
>> View this message in context: http://karaf.922171.n3.nabble.
>> com/Pax-JDBC-1-1-hikari-pool-and-XA-Support-tp4050977.html
>> Sent from the Karaf - User mailing list archive at Nabble.com.
>>
>
>
>
> --
> 
> Guillaume Nodet
>
>


-- 

Guillaume Nodet


Re: Re: Pax-JDBC 1.1 hikari pool and XA Support

2017-07-11 Thread Guillaume Nodet
Try with pool.poolMinSize and pool.poolMaxSize ...
Basically, prefix all the following properties with "pool." :

https://github.com/apache/aries/blob/trunk/transaction/transaction-jdbc/src/main/java/org/apache/aries/transaction/jdbc/RecoverableDataSource.java#L50-L64
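For example, in a pax-jdbc datasource config using the aries pool, the prefixed properties would look like this (the file name and values below are illustrative, not taken from your setup):

```properties
# etc/org.ops4j.datasource-whatever.cfg
osgi.jdbc.driver.name = mariadb
dataSourceName = whatever
pool = aries
xa = true
# forwarded to the RecoverableDataSource setters after stripping the "pool." prefix
pool.name = whatever-xa
pool.poolMinSize = 5
pool.poolMaxSize = 200
```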

Guillaume

2017-07-11 10:02 GMT+02:00 sahlex :

> Thanks for the enlightenment. I totally overlooked that part of the hikari
> documentation.
> Using pool.name works perfectly!
>
> Besides, I didn't find a way to set properties other than the name, like
> MaxSize. Is it possible somehow (maybe I should open another topic)?
>
> Thanks, Alexander.
>
>
>
> --
> View this message in context: http://karaf.922171.n3.nabble.
> com/Pax-JDBC-1-1-hikari-pool-and-XA-Support-tp4050977p4050985.html
> Sent from the Karaf - User mailing list archive at Nabble.com.
>



-- 

Guillaume Nodet


Re: Features with conditional requirements

2017-07-12 Thread Guillaume Nodet
The best way is to produce a unit test with a minimal set of features /
bundles.
You can see such a test here:

https://github.com/apache/karaf/blob/master/features/core/src/test/java/org/apache/karaf/features/internal/region/SubsystemTest.java#L191-L210
with the data:

https://github.com/apache/karaf/tree/master/features/core/src/test/resources/org/apache/karaf/features/internal/region/data5




2017-07-12 14:47 GMT+02:00 jeremie.bre...@gmail.com <
jeremie.bre...@gmail.com>:

> Hello,
>
> I am playing with conditional features, but I am not able to do what I
> want.
>
> I have these features (with Karaf 4.1.1):
>
> <feature name="ui">
>   <conditional>
>     <condition>req:component.service.faultmanagement</condition>
>     <bundle>mvn:MyUIBundle</bundle>
>   </conditional>
> </feature>
>
> <feature name="faultmanagement">
>   ...
>   <capability>
>     component.service.faultmanagement
>   </capability>
> </feature>
>
> When I install "ui" and "faultmanagement", I expect "MyUIBundle" to
> be installed. However, this is not the case.
>
> What I am doing wrong ? The resolver code is a bit complicated, how can I
> debug the behavior of such features ?
>
> Regards,
> Jérémie
>
>


-- 

Guillaume Nodet


Re: Writing commands for karaf shell.

2017-07-21 Thread Guillaume Nodet
If you look at Karaf >= 4.1.x, a bunch of commands are not coming from
Karaf anymore, but from Gogo or JLine.  I moved them when working on the
gogo / jline3 integration.  The main point that was blocking imho is that
they did not have completion support.  With the new fully scripted
completion system from gogo-jline, gogo commands can have full completion,
so I don't see any blocking points anymore.  It's just about tracking
commands and registering them in the karaf shell.

2017-07-21 15:27 GMT+02:00 Christian Schneider :

> On 21.07.2017 12:27, t...@quarendon.net wrote:
>
>> Yes, but what's the actual situation from a standards point of view?
>> Is a shell defined by a standard at all? OSGi enroute seems to require a
>> gogo shell and appears to rely on felix gogo shell command framework.
>> Is it just that Karaf happens to ship a shell that happens to be based on
>> the felix gogo shell (or perhaps not, but stack traces seem to suggest so),
>> but that basically if I want to implement a shell command I have to
>> implement it differently for each shell type?
>>
>> That seems a poor situation and leaves me with having to implement one
>> command implementation to be used in the development environment and one
>> that is used in the karaf deployment.
>>
>> Originally I thought that Karaf was the "enterprise version of felix".
>> This doesn't seem to be the case?
>>
>> There *could* be a really powerful environment and ecosystem here, if it
>> was all a *little* bit less fragmented :-)
>>
> I fully agree that we need to work towards more common approaches. The
> OSGi ecosystem is too small to afford being fragmented like this.
> We all have the chance and duty to work on improving this though.
>
> Christian
>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
>
>


-- 

Guillaume Nodet


Re: Writing commands for karaf shell.

2017-07-21 Thread Guillaume Nodet
Right now, stick with the Karaf annotations if you want to deploy your
commands in Karaf.  It works and it's supported.
My previous comment was more a reply to Christian about what needs to be
done to support gogo commands in the future.
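For reference, a minimal command using the Karaf 4.x shell annotations looks roughly like this (the scope, name, and behavior are made-up examples, not an existing Karaf command):

```java
import org.apache.karaf.shell.api.action.Action;
import org.apache.karaf.shell.api.action.Argument;
import org.apache.karaf.shell.api.action.Command;
import org.apache.karaf.shell.api.action.lifecycle.Service;

// Picked up by the Karaf shell when the bundle starts; help text and
// argument completion are derived from the annotations.
@Service
@Command(scope = "demo", name = "hello", description = "Prints a greeting")
public class HelloCommand implements Action {

    @Argument(index = 0, name = "name", description = "Who to greet", required = false)
    String name = "world";

    @Override
    public Object execute() throws Exception {
        System.out.println("Hello " + name + "!");
        return null;
    }
}
```

Deployed in a bundle, this shows up as demo:hello in the console.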

2017-07-21 17:27 GMT+02:00 :

> > If you look at Karaf >= 4.1.x, a bunch of commands are not coming from
> > Karaf anymore, but from Gogo or JLine.  I moved them when working on the
> > gogo / jline3 integration.  The main point that was blocking imho is that
> > they did not have completion support.  With the new fully scripted
> > completion system from gogo-jline, gogo commands can have full
> completion,
> > so I don't see any blocking points anymore.  It's just about tracking
> > commands and registering them in the karaf shell.
>
> I'm sorry, but I don't really understand what you're saying. You're
> talking about impediments to making changes to Karaf? Or how I go about
> writing commands?
> Sorry, just not following.
>
> Fundamentally, should commands that I write using apache felix gogo
> command features such as the Parameter and Description annotations, and the
> CommandService interface work? Or if I want to do something other than a
> simple "hello world", do I need to work out how to use the karaf shell from
> within bndtools so that I can write commands using the Karaf command
> framework?
>
> Thanks.
>



-- 

Guillaume Nodet


Re: Writing commands for karaf shell.

2017-07-22 Thread Guillaume Nodet
2017-07-22 12:16 GMT+02:00 Tim Ward :

> Sorry to wind this back a little, but there were a couple of questions
> from Tom which got skipped over.
>
> I'm afraid that when it comes to shells there isn't a standard. There was
> an RFC created a long time ago, which roughly represented the work that is
> now Gogo. There was a decision at the time that there wasn't a need for a
> standard, this decision could be revisited, particularly if someone wants
> to drive the work through the Alliance.
>
> As for the following question:
>
> Originally I thought that Karaf was the "enterprise version of felix".
> This doesn't seem to be the case?
>
>
> Karaf and Felix may both be hosted at Apache, but Karaf is a totally
> separate project from Felix with a very different ethos. Karaf does not
> implement an OSGi framework, or OSGi standards, but builds a server based
> on OSGi components from a variety of places.
>
> Karaf is flexible, but ultimately opinionated about libraries and dictates
> a number of high level choices. Felix works hard to allow you to use
> implementations from anywhere with the standalone components they produce.
>
> Karaf is also prepared to invent concepts (e.g. features and kar files)
> and not contribute them back to OSGi, leaving them as proprietary
> extensions. This even happens when OSGi standards do exist (or are nearly
> final). Karaf also promotes non standard (and some non Apache) programming
> model extensions.
>

May I point out that there's no such thing as "contributing back to
OSGi".  The OSGi Alliance is not an open source organization where people
are free to contribute.  It's a paying-membership organization, which is the
very opposite of the ASF way of doing things.  The only thing that can
happen is that some OSGi Alliance members contribute to an OSS project and
also work in the OSGi Alliance on related things, but there's no way for an
open source project to contribute anything per se.  It happens that most
Karaf developers are not OSGi Alliance members, so they don't have a
say in what happens there...


>
> While this does, by some measures, make Karaf a "bad" OSGi citizen, it is
> also one of the reasons why Karaf is so successful, and helps to drive OSGi
> adoption (a very good thing for OSGi). By being opinionated Karaf can be
> simpler for new users, even if it provides a more limited view of what your
> OSGi options are. The Felix framework, on the other hand, lets you make all
> the decisions, but also requires you to make all the decisions!
>
> In summary I would describe Karaf as an Open Source OSGi server runtime,
> where Felix is more like a base operating system.
>
> Tim
>
> Sent from my iPhone
>
> On 22 Jul 2017, at 06:44, Christian Schneider 
> wrote:
>
> That sounds interesting. Can you point us to the code where those commands
> are implemented and where the completion is defined?
> I know there is the completion support that you can define in the shell
> init script but I think this is difficult to maintain this way.
>
> Is it now possible to somehow define the completion for gogo commands per
> bundle or even by annotations directly on the class?
>
> Christian
>
> 2017-07-21 16:57 GMT+02:00 Guillaume Nodet :
>
>> If you look at Karaf >= 4.1.x, a bunch of commands are not coming from
>> Karaf anymore, but from Gogo or JLine.  I moved them when working on the
>> gogo / jline3 integration.  The main point that was blocking imho is that
>> they did not have completion support.  With the new fully scripted
>> completion system from gogo-jline, gogo commands can have full completion,
>> so I don't see any blocking points anymore.  It's just about tracking
>> commands and registering them in the karaf shell.
>>
>> 2017-07-21 15:27 GMT+02:00 Christian Schneider :
>>
>>> On 21.07.2017 12:27, t...@quarendon.net wrote:
>>>
>>>> Yes, but what's the actual situation from a standards point of view?
>>>> Is a shell defined by a standard at all? OSGi enroute seems to require
>>>> a gogo shell and appears to rely on felix gogo shell command framework.
>>>> Is it just that Karaf happens to ship a shell that happens to be based
>>>> on the felix gogo shell (or perhaps not, but stack traces seem to suggest
>>>> so), but that basically if I want to implement a shell command I have to
>>>> implement it differently for each shell type?
>>>>
>>>> That seems a poor situation and leaves me with having to implement one
>>>> command implementation to be used in the development environment and one
>>>> th

Re: Writing commands for karaf shell.

2017-07-22 Thread Guillaume Nodet
2017-07-22 7:44 GMT+02:00 Christian Schneider :

> That sounds interesting. Can you point us to the code where those commands
> are implemented and where the completion is defined?
> I know there is the completion support that you can define in the shell
> init script but I think this is difficult to maintain this way.
>
> Is it now possible to somehow define the completion for gogo commands per
> bundle or even by annotations directly on the class?
>

No, I was referring to the scripting way of defining completion.  Given a
gogo command is only defined by implementing the Function interface, the
completion mechanism has to be defined externally.


>
> Christian
>
> 2017-07-21 16:57 GMT+02:00 Guillaume Nodet :
>
>> If you look at Karaf >= 4.1.x, a bunch of commands are not coming from
>> Karaf anymore, but from Gogo or JLine.  I moved them when working on the
>> gogo / jline3 integration.  The main point that was blocking imho is that
>> they did not have completion support.  With the new fully scripted
>> completion system from gogo-jline, gogo commands can have full completion,
>> so I don't see any blocking points anymore.  It's just about tracking
>> commands and registering them in the karaf shell.
>>
>> 2017-07-21 15:27 GMT+02:00 Christian Schneider :
>>
>>> On 21.07.2017 12:27, t...@quarendon.net wrote:
>>>
>>>> Yes, but what's the actual situation from a standards point of view?
>>>> Is a shell defined by a standard at all? OSGi enroute seems to require
>>>> a gogo shell and appears to rely on felix gogo shell command framework.
>>>> Is it just that Karaf happens to ship a shell that happens to be based
>>>> on the felix gogo shell (or perhaps not, but stack traces seem to suggest
>>>> so), but that basically if I want to implement a shell command I have to
>>>> implement it differently for each shell type?
>>>>
>>>> That seems a poor situation and leaves me with having to implement one
>>>> command implementation to be used in the development environment and one
>>>> that is used in the karaf deployment.
>>>>
>>>> Originally I thought that Karaf was the "enterprise version of felix".
>>>> This doesn't seem to be the case?
>>>>
>>>> There *could* be a really powerful environment and ecosystem here, if
>>>> it was all a *little* bit less fragmented :-)
>>>>
>>> I fully agree that we need to work towards more common approaches. The
>>> OSGi ecosystem is too small to afford being fragmented like this.
>>> We all have the chance and duty to work on improving this though.
>>>
>>> Christian
>>>
>>> --
>>> Christian Schneider
>>> http://www.liquid-reality.de
>>>
>>> Open Source Architect
>>> http://www.talend.com
>>>
>>>
>>
>>
>> --
>> 
>> Guillaume Nodet
>>
>>
>
>
> --
> --
> Christian Schneider
> http://www.liquid-reality.de
> <https://owa.talend.com/owa/redir.aspx?C=3aa4083e0c744ae1ba52bd062c5a7e46&URL=http%3a%2f%2fwww.liquid-reality.de>
>
> Open Source Architect
> http://www.talend.com
> <https://owa.talend.com/owa/redir.aspx?C=3aa4083e0c744ae1ba52bd062c5a7e46&URL=http%3a%2f%2fwww.talend.com>
>



-- 

Guillaume Nodet


Re: Karaf Feature vs. OBR

2017-08-31 Thread Guillaume Nodet
Fwiw, bundle:watch is very specific and only checks the local repository, so
it won't work if you upload a new snapshot to a remote repository.

2017-08-31 20:44 GMT+02:00 Steinar Bang :

> >>>>> Steinar Bang :
> >>>>> Steinar Bang :
> >>>>> Jean-Baptiste Onofré :
>
> >>> By the way, I'm adding "deployment tooling" in Cave right now
> >>> (allowing to explode a kar, create meta feature assembling existing
> >>> features, ...).
>
> >> Interesting!  Will study!
> >> https://karaf.apache.org/manual/cave/latest-4/
>
> > So now I see three ways to do continous delivery:
> >  1. Snapshot builds from travis, triggered by github changes, deploying
> > to a packagecloud account[1] (a free package cloud account will
> > probably serve my humble needs...)
>
> I tried this: I created a 14 day trial account on packagecloud, and
> tried to deploy my code there.  That failed because the server side at
> packagecloud didn't like maven artifact attachment with extension .xml.
>
> This means that all of my bundle projects (which have a karaf feature
> attached) as well as the project with packaging pom, that aggregates all
> of my bundles' features into a feature repository, failed to deploy.
>
> The packagecloud people filed a bug on that but couldn't promise to have
> it fixed during what was left of my trial period.
>
> >  2. Installing jenkins on the same server karaf is running, and using
> > cave to refer to the /home/jenkins/.m2/repository
> >  3. Running jenkins on a different computer and rsync the
> > /home/jenkins/.m2/repository to the server karaf is running and use
> > cave to make karaf listen to /home/jenkins/.m2/repository (I tried
> > rsyncing my manual builds from my own account on a different
> > computer to the karaf account on the karaf server,
> > ie. /home/karaf/.m2/repository, but that got a lot of issues
> > wrt. file ownership and access. Doing it to the same account and on
> > a different account from the karaf account should resolve this)
>
> I ended up doing none of the above.  I ended up doing:
>   4. Set up an ftp server on the same machine where karaf was running,
>  and set up travis-ci to deploy to that repository, and then adding
>  the file-URL pointing to the deployment area as a maven repository
>  in the karaf config.  After a travis-ci deployment I have to log
>  into the karaf console and do "bundle:update" on each bundle I
>  would like refreshed
>
> This works, and is a lot simpler than what I did before.  But it's still
> not fully automated (I still have to do the manual "bundle:update"
> commands).
>
>
>


-- 

Guillaume Nodet


Re: Gogo scripts and feature:list

2017-09-05 Thread Guillaume Nodet
Try the following:
  featuresCount = ${${=$(feature:list -r | grep xxx | wc -l)}[1]}
The ${=...} will turn the output in an array, and ${...[1]} will grab the
first element.
See
http://zsh.sourceforge.net/Doc/Release/Expansion.html#Parameter-Expansion
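Broken out step by step, the nesting does the following (this session is a sketch; the exact padding in the wc -l output may vary):

```
karaf@root()> result = $(feature:list -r | grep xxx | wc -l)   # captures e.g. "      12" — a padded string
karaf@root()> words = ${=result}                               # word-splits the string into an array
karaf@root()> featuresCount = ${words[1]}                      # first element, usable as a number
```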

2017-09-05 11:04 GMT+02:00 Jean-Baptiste Onofré :

> Hi Jérémie,
>
> I think you can go directly with the features list:
>
> features = ($.context features) | grep -i xxx | tac
> if features.size() ...
>
> Regards
> JB
>
> On 09/05/2017 10:54 AM, J. Brebec wrote:
>
>> Hello,
>>
>> I want to detect, in a gogo script, if at least N features are installed
>> in Karaf (in order to show an "help" message).
>>
>> I am trying to do something like :
>>
>> featuresCount = (features:list -r | grep xxx | wc -l)
>>
>> however, I can't test featuresCount because it's not a number (wc -l seems
>> to return a string with two values). How can I make such a test in
>> karaf/gogo?
>>
>> Regards,
>> Jérémie
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



-- 

Guillaume Nodet


Re: Custom file install in Karaf assembly

2017-09-07 Thread Guillaume Nodet
We have an example in our build.
Here's the framework generation:

https://github.com/apache/karaf/blob/master/assemblies/features/static/pom.xml
It's actually used here:
  https://github.com/apache/karaf/blob/master/demos/profiles/static/pom.xml

A completely different approach may be to blacklist the fileinstall bundle.
Add the following in your plugin config:

<blacklistedBundles>
  <blacklistedBundle>mvn:org.apache.felix/org.apache.felix.fileinstall/3.6.0</blacklistedBundle>
</blacklistedBundles>



2017-09-05 12:19 GMT+02:00 Matteo Rulli :

> Hello,
> I'm trying to replace the default Felix fileinstall with my custom
> implementation.
>
> To do that I built an alternative framework feature and I generated a KAR
> out of it.
>
> After that I replaced the
>
> 
> org.apache.karaf.features
> framework
> kar
> 
>
> dependency in my karaf assembly project with the custom KAR (this is
> exacltly the same as the original one except that it contains my custom
> fileinstall). Unfortunately when I try to generate the assembly I get this
> stacktrace:
>
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
> goal org.apache.karaf.tooling:karaf-maven-plugin:4.1.2:assembly
> (default-assembly) on project flairkit.assembly: Unable to build assembly
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:213)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:154)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:146)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.
> buildProject(LifecycleModuleBuilder.java:117)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.
> buildProject(LifecycleModuleBuilder.java:81)
> at org.apache.maven.lifecycle.internal.builder.singlethreaded.
> SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.
> execute(LifecycleStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:993)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:345)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:191)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.codehaus.plexus.classworlds.launcher.Launcher.
> launchEnhanced(Launcher.java:289)
> at org.codehaus.plexus.classworlds.launcher.Launcher.
> launch(Launcher.java:229)
> at org.codehaus.plexus.classworlds.launcher.Launcher.
> mainWithExitCode(Launcher.java:415)
> at org.codehaus.plexus.classworlds.launcher.Launcher.
> main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: Unable to
> build assembly
> at org.apache.karaf.tooling.AssemblyMojo.execute(
> AssemblyMojo.java:268)
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(
> DefaultBuildPluginManager.java:134)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:208)
> ... 20 more
> Caused by: java.lang.NullPointerException
> at org.apache.karaf.tooling.AssemblyMojo.doExecute(
> AssemblyMojo.java:463)
> at org.apache.karaf.tooling.AssemblyMojo.execute(
> AssemblyMojo.java:262)
> ... 22 more
>
> Looking at the AssemblyMojo code, it seems that this is not the right way
> to achieve what I want to do. Could you suggest the right way to replace
> fileinstall with a custom implementation in my custom karaf (karaf v.
> 4.1.2) assembly?
>
> Thank you very much,
> Matteo
>
>


-- 

Guillaume Nodet


Re: OSGi Transaction control fails with hibernate

2017-09-13 Thread Guillaume Nodet
Fwiw, you should ask on the Aries mailing list, where tx-control is
developed.

I've recently worked on a new project called pax-transx which provides an
abstraction layer on top of transaction managers so that some features can
be accessed in a common way.  I think tx-control should use it
instead of wrapping the transaction manager again in an inflexible way.
Right now, tx-control uses its own instance of transaction manager and
there's no way around afaik, so you can't use the karaf transaction feature
if you want to use it.
Anyway, I'd gladly support you if you go to the aries mailing list to raise
this point !

2017-09-13 9:52 GMT+02:00 :

> Hello.
>
> I'm trying to get tx-control with XA transactions running (local is
> working).
> I found that tx-control opens a JTA transaction using
> RecoveryWorkAroundTransactionManager (derived from geronimo's
> TransactionManager Implementation) explicitly instead of using the
> registered TransactionManager (aries in my case for karaf 4.0.9). When
> hibernate EntityManager implementation tries to join the transaction it
> fails because it uses the TransactionManager provided by OsgiJtaPlatform
> (from hibernate-osgi) which is of course the one registered in osgi
> ecosystem.
>
> I think that the tx-control implementation has to use the
> TransactionManager registered with OSGi.
>
> Has anyone got that thing ever running?
>
> Best Alexander.
>



-- 

Guillaume Nodet


Re: Upgrading karaf 4.0.5 to 4.0.9 "feature:stop" not working

2017-09-13 Thread Guillaume Nodet
This is clearly a bug.  Could you please raise a JIRA issue?

2017-09-13 11:41 GMT+02:00 Saishrinivas Polishetti :

> Hi
>
> Initially we were using Karaf 4.0.5 and now we are planning to move to 4.0.9.
> feature:stop used to put a feature and its respective bundles into resolved
> state.
>
> feature:stop worked in 4.0.5, but in 4.0.9 the feature gets resolved while
> the bundles stay in active state.
> Is this a bug in 4.0.9, or do any changes need to be made in the feature
> description?
>
> Can anyone help ?
>
>
> Regards
> Sai
>



-- 

Guillaume Nodet


Re: Antw: Re: OSGi Transaction control fails with hibernate

2017-09-14 Thread Guillaume Nodet
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> [119:org.eclipse.jetty.io:9.3.14.v20161028]
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> [119:org.eclipse.jetty.io:9.3.14.v20161028]
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> executeProduceConsume(ExecuteProduceConsume.java:303)
> [130:org.eclipse.jetty.util:9.3.14.v20161028]
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
> produceConsume(ExecuteProduceConsume.java:148)
> [130:org.eclipse.jetty.util:9.3.14.v20161028]
> at org.eclipse.jetty.util.thread.strategy.
> ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> [130:org.eclipse.jetty.util:9.3.14.v20161028]
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
> [130:org.eclipse.jetty.util:9.3.14.v20161028]
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
> [130:org.eclipse.jetty.util:9.3.14.v20161028]
> at java.lang.Thread.run(Thread.java:745) [?:?]
> Caused by: java.lang.NullPointerException
> at org.apache.aries.tx.control.jpa.xa.impl.
> JPAEntityManagerProviderFactoryImpl$EnlistingDataSource.
> lambda$getConnection$4(JPAEntityManagerProviderFactoryImpl.java:193)
> ~[?:?]
> at org.apache.aries.tx.control.jpa.xa.impl.
> JPAEntityManagerProviderFactoryImpl$EnlistingDataSource.
> enlistedConnection(JPAEntityManagerProviderFactoryImpl.java:230) ~[?:?]
> ... 78 more
>
> Best, Alexander
>
>
> >>>
> Hi Alexander,
>
> So what you’re doing is passing a *fully configured* EntityManagerFactory
> to the resource provider factory. If you create the provider this way then
> you are responsible for setting up *all* of the EntityManagerFactory’s
> configuration, including how it’s going to integrate with transaction
> control. For local transactions there is nothing to integrate with , but in
> the general case this is actually quite hard to do, and I would advise not
> trying to do it.
>
> As you can see the EntityManagerFactory version of the provider factory
> <https://github.com/apache/aries/blob/ed8dbc79758766081203056cff27eb0bcbd7efb3/tx-control/tx-control-providers/jpa/tx-control-provider-jpa-xa/src/main/java/org/apache/aries/tx/control/jpa/xa/impl/JPAEntityManagerProviderFactoryImpl.java#L122>
>  does
> quite a bit less setup on your behalf than the configuration-driven
> version does
> <https://github.com/apache/aries/blob/ed8dbc79758766081203056cff27eb0bcbd7efb3/tx-control/tx-control-providers/jpa/tx-control-provider-jpa-xa/src/main/java/org/apache/aries/tx/control/jpa/xa/impl/XAJPAEMFLocator.java#L72>.
> If you were to
> provide a factory configuration for the “org.apache.aries.tx.control.jpa.xa”
> pid containing “osgi.unit.name=” and any
> necessary datasource configuration (i.e. that’s not coming from the
> persistence xml) then you could inject the JPAEntityManagerProvider
> directly as a service.
>
> More documentation about configuration-driven resources for Aries
> Tx-Control is available at http://aries.apache.org/
> modules/tx-control/xaJPA.html#creating-a-resource-using-a-
> factory-configuration
>
> Another thing that probably could be done would be to look at dynamically
> installing the plugin when using the EntityManagerFactoryBuilder version of
> the factory method. This, however, would need a patch to Aries Transaction
> Control, and would still not make your existing code work.
>
> Regards,
>
> Tim
>
>
> On 13 Sep 2017, at 10:59,  <
> alexander.sah...@brodos.de> wrote:
>
> Hi Tim,
>
> I use a JPAEntityManagerProviderFactory (providerFactory) which I inject
> as a service reference into my repository class.
> Furthermore, I inject a EntityManagerFactory (emf) into the repository
> class as well as the TransactionControl (txControl).
>
> The provider Factory is created by pax-jdbc (I use hibernate).
>
> This provider factory is then used to get the Entity manager like this:
>
> EntityManager em = providerFactory.getProviderFor(emf,
> null).getResource(txControl);
>
> It fails with an exception saying that the transaction cannot be joined,
> because it's not open.
>
> The wrapping call is like this:
> txControl.build()
> .required(
> () -> repo.store(article));
>
> Best, Alexander.
>
>
> >>>
> Hi Alexander,
>
> Do you have a code example of how you’re obtaining and using the
> EntityManager? There should be no usage of the OSGiJtaPlatform from the
> tx-control XA JPA resource provider, which means that there’s either a bug
> in the resource provider, or something is misconfi

Re: Antw: Re: OSGi Transaction control fails with hibernate

2017-09-14 Thread Guillaume Nodet
2017-09-14 14:17 GMT+02:00 Timothy Ward :

> Note that this work should be done as part of a new transaction control
> service implementation (there’s some common code which should help to speed
> up implementing it), not as changes to the current implementation, which is
> undergoing stabilisation as the Reference Implementation of the OSGi
> Transaction control service.
>

You mean a new module like the tx-control-service-xa inside
  https://github.com/apache/aries/tree/trunk/tx-control/tx-control-services
Right ?

>
> Also this update still won’t avoid the need for the JPA resource provider
> to have a custom plugin for transaction integration. The whole point of a
> managed resource is that it integrates with the Transaction Control service
> that gets passed to it, not by integrating with a third party service which
> may, or may not, be involved.
>
> Alexander - is there any chance of seeing the proof of concept code? It
> seems as though it’s pretty close to working with the existing bundles.
>
> Regards,
>
> Tim
>
>
> On 14 Sep 2017, at 12:42,  <
> alexander.sah...@brodos.de> wrote:
>
> I'll give it a try, maybe with a little guidance from you guys. First of all
> I'll try to inject a JTA TransactionManager into tx-control instead of the
> internal one. If that works, I'll let you know.
>
>
> >>>
>
> On 14 Sep 2017, at 10:46, Guillaume Nodet  wrote:
>
>
>
> 2017-09-14 11:40 GMT+02:00 Timothy Ward :
>
>> Hi Alexander,
>>
>> As has been discussed on the Aries lists before, I have no problem with
>> someone creating a separate implementation of the Transaction Control
>> service which leverages the OSGi JTA Service Specification. The reason that
>> the current implementation doesn’t do this is twofold:
>>
>>
>>- By embedding a transaction manager the current Tx Control
>>implementation can avoid the javax.transaction split package from the JVM.
>>This makes the implementation easier to use and deploy because the user
>>doesn’t need to mess around with the boot class path, or worry about what
>>JTA version is available
>>- By embedding a transaction manager the current Tx control
>>implementation can rely on specific behaviours of the transaction manager
>>that it uses. This means that the Tx control implementation can support 
>> the
>>last resource gambit and XA recovery.
>>
>> Fwiw, as I already indicated, the pax-transx project provides a layer
> solving those problems, in addition of providing additional features and
> pluggability.
>
> Would you be interested to incorporate it in Tx Control ?
>
>
> This is not something that I have the time to do, but another
> implementation of a transaction control service with a pluggable
> transaction manager would be a great addition.
>
>
> Guillaume
>
>
>>
>> If this is a proof of concept project then are you able to share it
>> somewhere (e.g. GitHub)? I’d like to help you get to the bottom of the NPE
>> that you’re seeing as I don’t think it should be possible for that to be
>> happening!
>>
>> Finally - yes the Aries user list is the best place to talk about this,
>> but I don’t want to move the conversation myself as I don’t know whether
>> you’re registered for that list, and don’t want you to miss my replies.
>>
>> Regards,
>>
>> Tim
>>
>>
>> On 14 Sep 2017, at 07:53,  <
>> alexander.sah...@brodos.de> wrote:
>>
>> Hi Tim.
>>
>> I'm using the 2.6.1 version of aries jpa support already. Normal
>> transaction control with blueprint and @Transactional annotation was
>> working fine.
>>
>> To have better control over startup dependencies and cope with
>> disappearing and appearing services during runtime we invest some time in
>> a Proof-Of-Concept for switching over to declarative services (DS).
>> Everything works fine so far - even restful services for DS with cxf-dosgi
>> works fine. Last bit to get it working is transaction management. With DS,
>> the @Transactional annotation is not working anymore due to the lack of
>> interceptors with DS.
>>
>> What do you think of the idea that tx-control should pick up a JTS
>> Transaction manager from the service registry instead of creating an own
>> one with new operator which is in fact tightly coupled. To implement loose
>> coupling here we should add a factory that may be configurable in the
>> factory config file.
>>
>> BTW, should we switch the discussion to aries group still?
>>
>> Best, Alexander.
>>

Re: Help interpreting error in "feature:install"

2017-09-28 Thread Guillaume Nodet
When I see
file:C:/Users//frameworks/apache-karaf-3.0.1/data/kar
@multihttp:///nexus/content/groups/digitalexp/http://<
nexushostport>/nexus/content/groups/digitalexp/
that makes me think you have some missing commas in your maven
configuration.
Could you check your etc/org.ops4j.pax.url.mvn.cfg file ?

2017-09-28 21:15 GMT+02:00 KARR, DAVID :

> After I determined that I have to run karaf 3.0.1 with Java 7, I'm now
> blocked trying to install a feature.  I'm having trouble fully
> understanding exactly what is wrong.
>
> I added the following lines to "bin/setenv.bat":
> ---
> set KARAF_OPTS=-Dhttp.useProxy=true -Dhttp.proxyHost=
> -Dhttp.proxyPort=8080
> set JAVA_MAX_MEM=3572M
> 
>
> After starting karaf, the instructions I have say to enter the following
> approximate commands:
> ---
> config:property-append -p  org.ops4j.pax.url.mvn 
> org.ops4j.pax.url.mvn.repositories
> http:///nexus/content/groups//
> feature:repo-add mvn://1.4.1-SNAPSHOT/xml/features
> feature:install -v -c -service
> -
>
> I've elided some of this with "<>" placeholders.
>
> When I run this, the last command fails with this:
> --
> Error executing command: Error resolving artifact
> :common-features:xml:features:1.1.0-SNAPSHOT: Could not transfer
> artifact :common-features:xml:features:1.1.0-SNAPSHOT from/to
> kar.repository (file:C:/Users//frameworks/apache-karaf-3.0.1/
> data/kar@multihttp:///nexus/content/
> groups/digitalexp/http:///nexus/content/groups/digitalexp/):
> Repository path C:\Users\\frameworks\apache-karaf-3.0.1\
> data\kar@multihttp:\\nexus\content\
> groups\\http:\\nexus\content\groups\does
> not exist, and cannot be created.
> --
>
> Now, I note that when I run my local build, it builds version
> "1.1.20-SNAPSHOT of the "common-features" artifact.  I looked on our nexus
> host, and the group does exist, and that artifact and version
> (1.1.0-SNAPSHOT) does appear to be present.
>
> I'm trying to understand at least what this error is actually complaining
> about, and perhaps that will lead to a solution.
>
>


-- 

Guillaume Nodet


Re: Providing alternative config mechanism than felix.fileinstall/Preserving config changes on re-install

2017-10-06 Thread Guillaume Nodet
You could also look at the read-only implementation of ConfigAdmin we have
in Karaf.
That can easily be used to remove fileinstall completely, as done in the
static configurations.

https://github.com/apache/karaf/tree/master/services/staticcm/src/main/java/org/apache/karaf/services/staticcm

2017-10-06 13:39 GMT+02:00 Jean-Baptiste Onofré :

> Hi Tom,
>
> You can implement your own PersistenceManager (ConfigAdmin service).
>
> Regards
> JB
>
>
> On 10/06/2017 01:07 PM, t...@quarendon.net wrote:
>
>> I can see KARAF-418, but that's pretty old, and sounds like it was
>> considered unnecessary? Is there anything else I can't find?
>>
>> I don't necessarily want to store things in a database, I just want
>> different behaviour to normal, to provide my own implementation of
>> something that listens to config changes and injects configuration on
>> startup. And I can write that bit, what I can't do is substitute it in at a
>> central enough level to replace fileinstall.
>>
>> I've made a little progress. I manually edited the "startup.properties"
>> file and put my bundle in there at level 11. It got activated. So what I
>> don't currently understand is a) where that file comes from (it's clearly
>> generated as part of building my karaf distribution, it's not in source
>> control) and b) what specifying the start-level in the feature.xml file
>> does (since it doesn't appear to specify the start level :-)).
>> My problem now appears to be that I'd written my code using declarative
>> services, and I think I need to go back to old fashioned bundle activators
>> and service trackers in order to reduce the dependencies and make the code
>> work in the "simple" environment I encounter down at that start level.
>>
>> There was also a comprehension question of why the ConfigRepository was
>> attempting to write the config files directly, rather than just calling
>> Configuration.update. Surely one thing or the other (calling update I
>> assume is preferable), but not both?
>>
>> Thanks.
>>
>> On 06 October 2017 at 11:40 Jean-Baptiste Onofré  wrote:
>>>
>>>
>>> Hi
>>>
>>> I guess you want to use an alternative backend to the filesystem (a
>>> database for instance).
>>>
>>> In that case we have a Jira about that and you can provide your own
>>> persistence backend.
>>>
>>> Regards
>>> JB
>>>
>>> On Oct 6, 2017, 12:30, at 12:30, t...@quarendon.net wrote:
>>>
>>>> I'm trying to establish some alternative configuration behaviour than
>>>> what felix-fileinstall gives me.
>>>> I have written a very simple component that reads configuration files
>>>> in from /etc and updates config admin with the information, much like
>>>> fileinstall does. I can run this and it appears to work, however I
>>>> still have the existing mechanism in that I'd like to remove.
>>>>
>>>> So I naively did the following:
>>>>set the start-level of my bundle to be 11, same as fileinstall
>>>> set felix.fileinstall.enableConfigSave to false in
>>>> etc/custom.properties
>>>>set felix.fileinstall.dir to empty
>>>>
>>>> Karaf fails to start.
>>>>
>>>> So my suspicion is that apache fileinstall is more centrally required
>>>> than I'd hoped. Looking at the karaf code there are certainly a few
>>>> places where it assumes a configuration contains a
>>>> felix.fileinstall.filename property that names the file where the
>>>> configuration is stored, and seems to directly read and write those
>>>> files. This appears to mean that I wouldn't be able to substitute my
>>>> own configuration storage backend, which is a shame (I'm actually
>>>> confused what org.apache.karaf.config.core.ConfigRepository is actually
>>>> doing here -- why does is write directly to the file, rather than just
>>>> letting fileinstall do it, especially as it only seems to allow for
>>>> ".cfg" and not ".config" files). There may be other reasons why karaf
>>>> won't start though.
>>>>
>>>> Is it likely that I would substitute felix.fileinstall in this way?
>>>>
>>>>
>>>> What I was actually trying to solve was what to do when a user
>>>> uninstalls and reinstalls our karaf-based product, and attempting to
>>>> preserve any configuration changes. What I had hoped to do was store
>>>> any actually modified configuration properties in separate files (just
>>>> the actual properties that were different from default or from the
>>>> originals in the etc/*.cfg files), so that the original etc/*.cfg files
>>>> would be replaced without difficulty, and the changed configuration
>>>> changes would then be applied.
>>>>
>>>> So alternative question: How else can I achieve the same thing without
>>>> making the users manually merge the configuration changes?
>>>>
>>>> Thanks.
>>>>
>>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



-- 

Guillaume Nodet


Re: Strange resolution problem

2017-10-09 Thread Guillaume Nodet
It was until a few months ago:

https://github.com/apache/felix/commit/b71956b2e880d64780708506f048eeb0e48657d0#diff-d05f486393d3040f98547983cf7d6452

So maybe your maven dependencies point to an old artifact ?

2017-10-09 8:42 GMT+02:00 David Leangen :

>
> Hi!
>
> I am stumped. I am having a resolution issue in Karaf as follows:
>
>
> org.osgi.framework.BundleException: Unable to resolve
> net.leangen.expedition.platform.ddd.diag [101](R 101.0): missing
> requirement [net.leangen.expedition.platform.ddd.diag [101](R 101.0)]
> osgi.wiring.package; (&(osgi.wiring.package=*org.osgi.service.serializer*
> )(version>=1.0.0)(!(version>=2.0.0))) Unresolved requirements:
> [[net.leangen.expedition.platform.ddd.diag [101](R 101.0)]
> osgi.wiring.package; (&(osgi.wiring.package=*org.osgi.service.serializer*
> )(version>=1.0.0)(!(version>=2.0.0)))]
>
>
> The problem is that there **is no package** org.osgi.service.serializer.
> There is only org.apache.felix.serializer.
>
> I have gone through all my OBRs and repositories, but cannot find any
> reference to that package. I even did a search in the Karaf code and did
> not find any such reference. Since the package does not exist, the
> resolution error is correct. The problem is: why is there a requirement on
> that package in the first place??
>
>
> What am I missing??
>
>
> Thanks!
> =David
>
>
>


-- 

Guillaume Nodet


Re: Strange resolution problem

2017-10-09 Thread Guillaume Nodet
Given it happens at runtime, then it means the import is already in your
net.leangen.expedition.platform.ddd.diag bundle, so it's just the
consequence of a problem happening at build time.  This means that either a
class imports the package, or there is an explicit instruction to import it,
either in the bundle plugin config or in a bnd file.  The last possibility is
that the bundle being built embeds code from a different jar, and that code
has a class which imports the package.
As for blacklisting, you can add urls to the etc/blacklisted.properties file,
but it seems the problem comes from your own
net.leangen.expedition.platform.ddd.diag bundle...
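If the stray import turns out to be generated at build time by the maven-bundle-plugin, one way to get rid of it is an explicit exclusion in the Import-Package instruction (a sketch; the `!` exclusion must appear before the `*` wildcard):

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <!-- never import this package; keep the default computation for the rest -->
      <Import-Package>!org.osgi.service.serializer,*</Import-Package>
    </instructions>
  </configuration>
</plugin>
```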

2017-10-10 0:03 GMT+02:00 David Leangen :

>
>
> > On Oct 9, 2017, at 4:23 PM, Achim Nierbeck 
> wrote:
>
> > On Oct 9, 2017, at 4:28 PM, Guillaume Nodet  wrote:
>
> Thanks for the suggestions. You are no doubt right, so I quadruple checked
> my code, and bumped all versions. Ran the build locally, and confirmed that
> the old package is not being pulled in… but it still gets pulled in during
> resolution in Karaf.
>
> Two questions:
>
> 1. Is there a mechanism to trace back the results of the resolution so I
> can try to pinpoint where the bad package is being pulled in?
>
> 2. Is it possible to blacklist in Karaf, so I can avoid a particular
> version of a bundle or package? (I did not see anything in the docs.)
>
>
> Thanks!
> =David
>
>
>


-- 

Guillaume Nodet


Re: What kind of things would prevent a set of bundles from going Active?

2017-10-10 Thread Guillaume Nodet
; > > > expression?).
> > > >
> > > >  > On 09/29/2017 07:30 PM, KARR, DAVID wrote:
> > > >  > > I'm still working with the legacy app using Karaf 3.0.1,
> > > which I don't
> > > >  > have very good overall documentation for.
> > > >  > >
> > > >  > > I've been able to execute my "feature:install" command in
> > > > the
> > > karaf
> > > >  > console, which appeared to complete successfully, but at that
> > > point it's
> > > >  > apparently expected that all of my bundles are in an "Active"
> > > state.
> > > >  > However, for some reason most of them are not.  Some are, but
> > > some of
> > > >  > the application-specific bundles are "Installed", or even
> > > "Grace
> > > >  > Period".
> > > >  > >
> > > >  > > I've checked the karaf.log, and there are no obvious red
> > > flags.
> > > >  > >
> > > >  > > When I try to hit my REST service at localhost:8181, it
> > > > just
> > > times
> > > >  > out, which is not surprising, as the bundle in question
> > > probably is not
> > > >  > active.
> > > >  > >
> > > >  > > I also tried installing the web console.  I just did
> > > "feature:install
> > > >  > webconsole" and then went to
> > > "http://localhost:8181/system/console"; in
> > > >  > my browser.  This timed out.
> > > >  > >
> > > >  > > What should I be looking at to diagnose this?
> > > >  > >
> > > >  >
> > > >  > --
> > > >  > Jean-Baptiste Onofré
> > > >  > jbono...@apache.org
> > > >  > http://blog.nanthrax.net
> > > >  > Talend - http://www.talend.com
> > > >
> > > >
> > > >
> > > > --
> > > >
> > > > --
> > > > Christian Schneider
> > > > http://www.liquid-reality.de
> > > >
> > > > Computer Scientist
> > > > http://www.adobe.com
> > > >
> > >
> > > --
> > > Jean-Baptiste Onofré
> > > jbono...@apache.org
> > > http://blog.nanthrax.net
> > > Talend - http://www.talend.com
>



-- 

Guillaume Nodet


Re: [Karaf 4.1.2] - error with command history + grep

2017-10-12 Thread Guillaume Nodet
I've seen it too fwiw (not sure if it was on master or not), but I haven't
investigated.

2017-10-12 20:45 GMT+02:00 Jean-Baptiste Onofré :

> Hi
>
> It sounds like a bug. Let me try to reproduce.
>
> Regards
> JB
> On Oct 12, 2017, at 19:43, francois papon 
> wrote:
>>
>> Hi,
>>
>> When I use the commande history with grep, I've got this error :
>>
>> karaf@root()> history | grep cave
>> grep: java.lang.ArrayIndexOutOfBoundsException
>>
>> I have some entry of "cave" word in my history.
>>
>> in the log :
>>
>> 2017-10-08 18:28:45,969 | DEBUG | nsole user karaf |
>> LoggingCommandSessionListener| 42 - org.apache.karaf.shell.core - 4.1.2
>> | Executing command: 'history | grep cave'
>> 2017-10-08 18:28:47,000 | DEBUG | nsole user karaf |
>> LoggingCommandSessionListener| 42 - org.apache.karaf.shell.core - 4.1.2
>> | Command: 'history | grep cave' returned 'null'
>>
>> I try with other word, I've got the same error :
>>
>> karaf@root()> history | grep list
>>   852  service:list ConnectionFactory
>>   854  service:list ConnectionFactory
>> grep: java.lang.ArrayIndexOutOfBoundsException
>>
>> In the log :
>>
>> 2017-10-08 18:33:12,334 | DEBUG | nsole user karaf |
>> LoggingCommandSessionListener| 42 - org.apache.karaf.shell.core - 4.1.2
>> | Executing command: 'history | grep list'
>> 2017-10-08 18:33:13,372 | DEBUG | nsole user karaf |
>> LoggingCommandSessionListener| 42 - org.apache.karaf.shell.core - 4.1.2
>> | Command: 'history | grep list' returned 'null'
>>
>>
>> Karaf - info :
>>
>> Karaf
>>   Karaf version   4.1.2
>>   OSGi Framework  org.apache.felix.framework-5.6.6
>>
>>
>> Francois
>>
>


-- 

Guillaume Nodet


Re: javax.print.PrintServiceLookup under Karaf

2017-10-13 Thread Guillaume Nodet
It looks like the code is using:
  ServiceLoader.load(PrintServiceLookup.class)
If those services are provided by the JRE, then you need to make sure that
the first time the service list is looked up, the thread context class
loader is set to null, so that it will use the system classloader to load
those services.
That's really a bad idea to initialize a static variable with something
dependent on the context, but there's not much that can be done about it.
So whenever you use lookupPrintServices make sure you do it with a block
like:

public static PrintService[] lookupPrintServices(DocFlavor flavor,
                                                 AttributeSet attributes) {
    ClassLoader tccl = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(null);
        return PrintServiceLookup.lookupPrintServices(flavor, attributes);
    } finally {
        Thread.currentThread().setContextClassLoader(tccl);
    }
}
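Outside OSGi, the same save/restore pattern can be exercised standalone with any ServiceLoader-based JDK service; a minimal sketch (the service chosen here is arbitrary, just to have something to look up):

```java
import java.nio.charset.spi.CharsetProvider;
import java.util.ServiceLoader;

public class TcclDemo {
    public static void main(String[] args) {
        ClassLoader tccl = Thread.currentThread().getContextClassLoader();
        try {
            // A null TCCL makes ServiceLoader fall back to the system class loader
            Thread.currentThread().setContextClassLoader(null);
            for (CharsetProvider p : ServiceLoader.load(CharsetProvider.class)) {
                System.out.println("provider=" + p.getClass().getName());
            }
        } finally {
            // Always restore the original context class loader
            Thread.currentThread().setContextClassLoader(tccl);
        }
        System.out.println("restored=" + (Thread.currentThread().getContextClassLoader() == tccl));
    }
}
```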

Guillaume

2017-10-13 20:07 GMT+02:00 Ygor Castor :

> Hello! I'm having a problem with PrintServiceLookup under Karaf, it seems
> that when i run a PrintServiceLookup.lookupPrintServices(null, null) it
> always return empty, after some investigation i found that this method
> searchs for a javax.print.PrintServiceLookup under /META-INF/services , the
> file contains the following:
>
> # Provider for Java Print Service
> sun.print.Win32PrintServiceLookup
>
> It seems that it looks to the running JRE to define which printService to
> use, so i created the required folder in my bundle and copied the
> javax.print.PrintServiceLookup file to it, with that it worked. But i can't
> rely on that in a production server, since the application can be runned on
> linux or windows.
>
> How can i fix that?
>



-- 

Guillaume Nodet


Re: How to put an external file on the classpath so code in karaf bundle can read it?

2017-10-17 Thread Guillaume Nodet
One way to achieve that is to put the config file in a fragment and attach
it to your bundle. It will be able to access those without any problem then.
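For illustration, such a fragment needs little more than a manifest along these lines (the bundle symbolic names are invented for the example); every file packed into the fragment jar then becomes visible through the host bundle's class loader:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.app.resources
Bundle-Version: 1.0.0
Fragment-Host: com.example.app
```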

2017-10-17 22:34 GMT+02:00 KARR, DAVID :

> I'm working on a project using Karaf 3.0.1 (can't upgrade).
>
> A colleague has a situation where he's using an artifact that tries to
> load a properties file from the classpath using "getResource()", as opposed
> to "getResourceAsStream()".  We can't change this.
>
> He's tried to place these properties file on disk and adding the path to
> that directory to the classpath when starting karaf.  When he steps through
> the code, it doesn't find those files.  I didn't watch his debugging steps,
> but he said that it appears to only use the bundle classloader, so
> augmenting the classpath on the karaf command line makes no difference.
>
> I know about the ability to place ".cfg" files into karaf/etc and define
> persistent properties in a blueprint, but that doesn't help, because this
> artifact expects to find these properties files on the classpath.
>
> What else can we do to make these files findable by "getResource()" from
> the code in the artifact?
>



-- 

Guillaume Nodet


Re: How to put an external file on the classpath so code in karaf bundle can read it?

2017-10-17 Thread Guillaume Nodet
There's no difference between getResource() and getResourceAsStream() in
the way the resource is obtained.
In the end, a call to getResourceAsStream(name) is equivalent to
getResource(name).openStream().
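A quick standalone check (using a class-file resource from the JDK, since those remain accessible under the module system) shows the two lookups return the same bytes:

```java
import java.io.InputStream;
import java.util.Arrays;

public class ResourceDemo {
    public static void main(String[] args) throws Exception {
        String name = "java/lang/String.class";  // same resource, two APIs
        ClassLoader cl = ClassLoader.getSystemClassLoader();
        try (InputStream a = cl.getResourceAsStream(name);
             InputStream b = cl.getResource(name).openStream()) {
            System.out.println("equal=" + Arrays.equals(a.readAllBytes(), b.readAllBytes()));
        }
    }
}
```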

What you need is a way to put your resource in the bundle classloader which
can be done using a fragment.

2017-10-17 23:18 GMT+02:00 KARR, DAVID :

> Won’t that still be deployed as a jar file?  Doesn’t that mean that
> “getResource()” (as opposed to “getResourceAsStream”) will fail?
>
>
>
> *From:* Guillaume Nodet [mailto:gno...@apache.org]
> *Sent:* Tuesday, October 17, 2017 1:52 PM
> *To:* user 
> *Subject:* Re: How to put an external file on the classpath so code in
> karaf bundle can read it?
>
>
>
> One way to achieve that is to put the config file in a fragment and attach
> it to your bundle. It will be able to access those without any problem then.
>
>
>
> 2017-10-17 22:34 GMT+02:00 KARR, DAVID :
>
> I'm working on a project using Karaf 3.0.1 (can't upgrade).
>
> A colleague has a situation where he's using an artifact that tries to
> load a properties file from the classpath using "getResource()", as opposed
> to "getResourceAsStream()".  We can't change this.
>
> He's tried to place these properties file on disk and adding the path to
> that directory to the classpath when starting karaf.  When he steps through
> the code, it doesn't find those files.  I didn't watch his debugging steps,
> but he said that it appears to only use the bundle classloader, so
> augmenting the classpath on the karaf command line makes no difference.
>
> I know about the ability to place ".cfg" files into karaf/etc and define
> persistent properties in a blueprint, but that doesn't help, because this
> artifact expects to find these properties files on the classpath.
>
> What else can we do to make these files findable by "getResource()" from
> the code in the artifact?
>
>
>
>
>
> --
>
> 
> Guillaume Nodet
>
>
>



-- 

Guillaume Nodet


Re: Avoid unwanted refresh of bundles wired to activemq or cxf features

2017-10-19 Thread Guillaume Nodet
Bundle refreshes cascade, so in order to find the cause, you need
to trace back the tree.
In your example you have
  *activemq-karaf/5.14.1 (Wired to org.apache.activemq.activemq-osgi/5.14.1
which is being refreshed)*
So you need to find why the *activemq-osgi* bundle is refreshed, and so on.
Usually, you end up with an optional import which is not satisfied but which
one of the newly installed bundles can satisfy; in order to get that import
wired, the bundle needs to be refreshed.
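For reference, an optional import is declared in a bundle's MANIFEST.MF like this (the package name is invented for the example); if it is unsatisfied at install time and a later deployment provides the package, wiring it requires a refresh:

```
Import-Package: com.example.optional.api;resolution:=optional
```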

The deploy folder does not allow tight control over the deployment, but if
you use the command line, you'll be able to avoid the refresh as
Jean-Baptiste hinted.

2017-10-19 11:13 GMT+02:00 Arnaud Geslin :

> Hello
>
> We build kar files with Talend studio (some Camel routes) that are
> completely independent. When deploying those files by copy into
> container/deploy or by "bundle:install file://..." in the console, it
> sometimes refreshes all the other bundles already installed and active
> ("stopping", "resolved", then "active" again). This is a bit annoying on a
> production system.
>
> I've searched in the ML archive but did not find any discussion about
> precisely the same issue
>
> The log says :
>
> 
>
> Changes to perform:
>   Region: root
>   Bundles to install:
>     mvn:gfc.R_FRONTAL_HTTP/R_FRONTAL_HTTP/0.2
>     mvn:org.apache.camel/camel-jetty-common/2.17.3
>     mvn:org.apache.camel/camel-jetty9/2.17.3
>     mvn:org.codehaus.woodstox/stax2-api/3.1.4
> Installing bundles:
>   mvn:gfc.R_FRONTAL_HTTP/R_FRONTAL_HTTP/0.2
>   mvn:org.apache.camel/camel-jetty-common/2.17.3
>   mvn:org.apache.camel/camel-jetty9/2.17.3
>   mvn:org.codehaus.woodstox/stax2-api/3.1.4
> Stopping bundles:
>   gfc.R7_PU017B_BP_FROMKHEOPSTOSAP/0.1.0
>   gfc.R_BROKER_KHEOPS/0.1.0
>   ...
> Refreshing bundles:
>   activemq-karaf/5.14.1 (Wired to org.apache.activemq.activemq-osgi/5.14.1 which is being refreshed)
>   gfc.R7_PU017B_BP_FROMKHEOPSTOSAP/0.1.0 (Wired to org.apache.activemq.activemq-osgi/5.14.1 which is being refreshed)
>   gfc.R_BROKER_BUS/0.1.0 (Wired to org.apache.activemq.activemq-osgi/5.14.1 which is being refreshed)
>   gfc.R_BROKER_KHEOPS/0.1.0 (Wired to org.apache.activemq.activemq-osgi/5.14.1 which is being refreshed)
> ---
>
> That's right all thoses bundles use activemq, but this feature is already
> installed and I don't understand why it should be refreshed when installing
> a new bundle, and then all the bundles wired to it.
> Same issue for bundles that use Apache cxf.
>
> I tried this workaround :
>
>
>
>
>
>
>
> [root@kardev01 container]# cat etc/org.apache.karaf.features.cfg
> ...
> featuresBoot = \
>     (instance, \
>     activemq-client, \
>     activemq-camel, \
>     activemq, \
>     package, \
>
> and also added in startup.properties :
> mvn\:joda-time/joda-time/2.9.2 = 50
>
> but it had no effect.
>
> I also tried an other workaround, unzipping the kar file, to remove in the
> feature-xx.xml file the dependencies to activemq
> (activemq and activemq-camel)
> then zip again the kar, and deploy it again but the bundles are still
> refreshed.
>
> I guess we don't really need to refresh the activemq or cxf features, how
> could we avoid this ?
>
> Thank you
> Loko
>



-- 

Guillaume Nodet


Re: Karaf git repo gone?

2017-10-23 Thread Guillaume Nodet
The official repository is now
  https://gitbox.apache.org/repos/asf?p=karaf.git
or
  https://github.com/apache/karaf

It is unfortunate that all the links are broken.  Maybe infra can set up a
redirect...

Guillaume

2017-10-24 0:26 GMT+02:00 Seth Leger :

> Hi everyone,
>
> The Karaf git repo at:
>
> https://git-wip-us.apache.org/repos/asf?p=karaf.git
>
> appears to have been removed. Was this intentional? It makes tracking
> down things in JIRA very difficult because all of the automated git repo
> links are now broken.
>
> Seth Leger
> The OpenNMS Group, Inc.
>



-- 
----
Guillaume Nodet


Re: Karaf Bundle stuck in “Starting”-status when trying to install feature programmatically

2017-11-06 Thread Guillaume Nodet
I'm not exactly sure what happens, especially, the fact that you're saying
there's no log is weird.
If you are programmatically calling the FeaturesService, can you make sure
all exceptions caught from those calls are logged too ?
Because I don't think the service will log an exception before throwing it,
so the missing log statement could be in your plugin manager.

I suspect several things could go wrong.  One of them is the fact
that the FeaturesService needs the repositories to be accessible using
their URIs, so you can't point the FeaturesService to a file:xxx location
and then remove that file.  You need to first remove the repository from
the FeaturesService.

Guillaume

2017-11-06 8:51 GMT+01:00 Marius Dienel :

> Hey Guys,
>
>
> we're having a problem with Karaf Version > 4.0.4. We are currently using
> 4.0.9, but are thinking of upgrading to 4.0.10 soon. In our application we
> have a lot of features. One of them installs a bundle (PluginManager),
> which itself adds, installs and starts one or multiple features on startup
> (plugins for our application). Users are able to install the plugins via
> the GUI and the files (.jar) won't get deleted after building
> karaf/assembly.
>
> For each of those features a feature repository is created, if there is
> not already one from earlier startups. Everything works fine, if there
> already is a feature repository, but after a new build, when the feature
> repository does not exist anymore the installation of the features does not
> work, which leads to the bundle (PluginManager) being stuck in
> "GracePeriod"-status, for versions 4.0.5,4.0.6,4.0.7 or in
> "Starting"-status for versions 4.0.8,4.0.9,4.0.10. Everything works as
> expected with version 4.0.4.
>
> The workflow to install the features/plugins is following:
>
> FeatureService.addRepository(featureUri) -->
> FeatureService.getRepository(symbolicName) -->
> Repository.getFeatures() -->
> FeatureService.installFeatures(featureSet, options)
>
> After stepping over installFeatures in debug-mode the debugger is stuck
> and I cannot continue. The biggest problem is that there is no error or
> exception in karaf.log. Karaf is still starting correctly and everything
> except the plugins part is usable.
>
> Using 4.1.X is not an option right now, since there are dependency
> problems when upgrading.
>
> Thank you in advance for your help.
>
> Greetings Marius​
>
>
>
> -
> doubleSlash gehört zu "Deutschlands besten Arbeitgebern 2017"
> <https://www.doubleslash.de/unternehmen/presse-uebersicht/pressemitteilungen/detail/dreifachauszeichnung-fuer-doubleslash-als-einer-der-besten-arbeitgeber/>
> <http://blog.doubleslash.de/>--
> 
> -
>
> *Marius Dienel *Auszubildender
> doubleSlash Net-Business GmbH
> Otto-Lilienthal-Str. 2
> <https://maps.google.com/?q=Otto-Lilienthal-Str.+2&entry=gmail&source=g>
> D-88046 Friedrichshafen
>
> Fon: +49 7541 / 70078-211 <+49%207541%2070078211>
> Fax: +49 7541 / 70078-111 <+49%207541%2070078111>
> marius.die...@doubleslash.de
> https://doubleSlash.de
> <http://doubleslash.de/>--
> -
>
> doubleSlash Net-Business GmbH
> Geschäftsführung: Konrad Krafft, Andreas Strobel
> Sitz, Registergericht: Friedrichshafen, Amtsgericht Ulm HRB 631718
> 
> ---
>



-- 

Guillaume Nodet


Re: version resolving

2017-11-06 Thread Guillaume Nodet
Those version resolutions are done by pax-url-aether, which uses the aether
library (the same one used by maven).
I think the behavior is correct, see

https://cwiki.apache.org/confluence/display/MAVENOLD/Dependency+Mediation+and+Conflict+Resolution

2017-11-06 12:06 GMT+01:00 Michal Hlavac :

> Hi,
>
> I would like to ask about version resolving in repository element of
> feature file.
> I am asking because of line 21 in https://github.com/apache/cxf/
> blob/cxf-3.2.1/osgi/karaf/features/src/main/resources/features.xml
>
> mvn:org.ops4j.pax.cdi/pax-cdi-features/[1.0.0.RC1,2)/xml/features
>
> When I download karaf 4.1.3 and start it, then execute command:
> feature:list | grep pax-cdi
> output show version 1.0.0.RC2
>
> But after feature:repo-add cxf 3.2.0, then re-run feature:list | grep
> pax-cdi
> it shows additional 1.0.0-SNAPSHOT
>
> I suspect that karaf cannot resolve the RCx version suffix
>
> thanks, m.
>
>


-- 

Guillaume Nodet


Re: version resolving

2017-11-06 Thread Guillaume Nodet
This is not really a bug imho, but there's no way to express the
version range more precisely in maven.
In order to work around this behavior, you need to remove maven snapshot
repositories containing pax-cdi.  This way, it should not be resolved.

2017-11-06 13:53 GMT+01:00 Michal Hlavac :

> it means that there is bug in CXF, right?
>
>
>
> Because I can't build offline karaf distribution with cxf-3.2.0
>
> It searches for 1.0.0-SNAPSHOT on first start
>
>
>
> m.
>
>
> On pondelok, 6. novembra 2017 13:11:28 CET Guillaume Nodet wrote:
>
> Those version resolution are done by pax-url-aether which uses the aether
> library (the same one used by maven).
>
> I think the behavior is correct, see
>
>    https://cwiki.apache.org/confluence/display/MAVENOLD/Dependency+Mediation+and+Conflict+Resolution
>
>
> 2017-11-06 12:06 GMT+01:00 Michal Hlavac :
>
> Hi,
>
> I would like to ask about version resolving in repository element of
> feature file.
> I am asking because of line 21 in https://github.com/apache/cxf/
> blob/cxf-3.2.1/osgi/karaf/features/src/main/resources/features.xml
>
> <repository>mvn:org.ops4j.pax.cdi/pax-cdi-features/[1.0.0.RC1,2)/xml/features</repository>
>
> When I download karaf 4.1.3 and start it, then execute command:
> feature:list | grep pax-cdi
> output show version 1.0.0.RC2
>
> But after feature:repo-add cxf 3.2.0, then re-run feature:list | grep
> pax-cdi
> it shows additional 1.0.0-SNAPSHOT
>
> I suspect that Karaf cannot resolve the RCx version suffix.
>
> thanks, m.
>
>
>
>
> --
>
> 
> Guillaume Nodet
>
>
>
>


-- 

Guillaume Nodet


Re: version resolving

2017-11-06 Thread Guillaume Nodet
In particular, change etc/org.ops4j.pax.url.mvn.cfg to:
  org.ops4j.pax.url.mvn.repositories= \
http://repo1.maven.org/maven2@id=central

Also, remove your local ~/.m2/repository/org/ops4j/pax/cdi/pax-cdi-features

It should do the trick.

2017-11-06 14:12 GMT+01:00 Guillaume Nodet :

> This is not really a bug imho, but there's no way to specify better the
> version range in maven.
> In order to work around this behavior, you need to remove maven snapshot
> repositories containing pax-cdi.  This way, it should not be resolved.
>
> 2017-11-06 13:53 GMT+01:00 Michal Hlavac :
>
>> it means that there is bug in CXF, right?
>>
>>
>>
>> Because I can't build offline karaf distribution with cxf-3.2.0
>>
>> It searches for 1.0.0-SNAPSHOT on first start
>>
>>
>>
>> m.
>>
>>
>> On pondelok, 6. novembra 2017 13:11:28 CET Guillaume Nodet wrote:
>>
>> Those version resolutions are done by pax-url-aether, which uses the Aether
>> library (the same one used by Maven).
>>
>> I think the behavior is correct, see
>>
>>    https://cwiki.apache.org/confluence/display/MAVENOLD/Dependency+Mediation+and+Conflict+Resolution
>>
>>
>> 2017-11-06 12:06 GMT+01:00 Michal Hlavac :
>>
>> Hi,
>>
>> I would like to ask about version resolving in repository element of
>> feature file.
>> I am asking because of line 21 in https://github.com/apache/cxf/
>> blob/cxf-3.2.1/osgi/karaf/features/src/main/resources/features.xml
>>
>> <repository>mvn:org.ops4j.pax.cdi/pax-cdi-features/[1.0.0.RC1,2)/xml/features</repository>
>>
>> When I download karaf 4.1.3 and start it, then execute command:
>> feature:list | grep pax-cdi
>> output show version 1.0.0.RC2
>>
>> But after feature:repo-add cxf 3.2.0, then re-run feature:list | grep
>> pax-cdi
>> it shows additional 1.0.0-SNAPSHOT
>>
>> I suspect that Karaf cannot resolve the RCx version suffix.
>>
>> thanks, m.
>>
>>
>>
>>
>> --
>>
>> 
>> Guillaume Nodet
>>
>>
>>
>>
>
>
> --
> 
> Guillaume Nodet
>
>


-- 

Guillaume Nodet


Re: Why is org.apache.aries.blueprint.core.compatibility installed for some some features

2017-11-08 Thread Guillaume Nodet
Can you use feature:install --verbose --all-wiring and send us the output?

2017-11-08 11:03 GMT+01:00 João Assunção :

> Hello all,
>
> In Karaf 4.1.3 when I do a feature:install one of my features
> the org.apache.aries.blueprint.core.compatibility bundle gets installed
> causing a refresh of all bundles and a Karaf shutdown.
>
> None of the bundles in the feature uses blueprint, and the more exotic
> thing is that one of the bundles uses a Conditional-Package instruction.
>
> What are possible reasons for this blueprint compatibility bundle to get
> installed ?
>
> Thanks.
>
> João Assunção
>
> Email: joao.assun...@exploitsys.com
> Mobile: +351 916968984
> Phone: +351 211933149
> Web: www.exploitsys.com
>
>
>


-- 

Guillaume Nodet


Re: Why is org.apache.aries.blueprint.core.compatibility installed for some some features

2017-11-08 Thread Guillaume Nodet
This looks like a regression caused by KARAF-4932.
Could you please raise a JIRA for this issue?

A workaround could be to modify the generation of your custom assembly so
that the blueprint compatibility bundle is installed by default.  At least,
it would avoid the unwanted refresh of all blueprint apps.

2017-11-08 17:16 GMT+01:00 João Assunção :

> Of course
>
> The feature I'm trying to install:
>
> karaf@root()> feature:info paybox-io-modbus
> Feature paybox-io-modbus 0.1.0.SNAPSHOT
> Description:
>   Modbus I/O implementation
> Feature has no configuration
> Feature has no configuration files
> Feature has no dependencies.
> Feature contains followed bundles:
>   mvn:pt.brisa.common/common-service-core/1.3.0
>   mvn:pt.brisa.paybox/io-api/0.1.0-SNAPSHOT
>   mvn:pt.brisa.paybox/io-modbus/0.1.0-SNAPSHOT
>   mvn:pt.brisa.paybox/io-commands/0.1.0-SNAPSHOT
> Feature has no conditionals.
>
> In attachment the output of feature:install
>
> Thank you.
>
> João Assunção
>
> Email: joao.assun...@exploitsys.com
> Mobile: +351 916968984
> Phone: +351 211933149
> Web: www.exploitsys.com
>
>
>
> On Wed, Nov 8, 2017 at 2:39 PM, Guillaume Nodet  wrote:
>
>> Can you use feature:install --verbose --all-wiring and send us the
>> output ?
>>
>> 2017-11-08 11:03 GMT+01:00 João Assunção :
>>
>>> Hello all,
>>>
>>> In Karaf 4.1.3 when I do a feature:install one of my features
>>> the org.apache.aries.blueprint.core.compatibility bundle gets installed
>>> causing a refresh of all bundles and a Karaf shutdown.
>>>
>>> None of the bundles in the feature uses blueprint, and the more exotic
>>> thing is that one of the bundles uses a Conditional-Package instruction.
>>>
>>> What are possible reasons for this blueprint compatibility bundle to get
>>> installed ?
>>>
>>> Thanks.
>>>
>>> João Assunção
>>>
>>> Email: joao.assun...@exploitsys.com
>>> Mobile: +351 916968984
>>> Phone: +351 211933149
>>> Web: www.exploitsys.com
>>>
>>>
>>>
>>
>>
>> --
>> 
>> Guillaume Nodet
>>
>>
>


-- 

Guillaume Nodet


Re: Problem with new JLine in Karaf 4.1.3?

2017-11-29 Thread Guillaume Nodet
FWIW, the JIRA is KARAF-5497
<https://issues.apache.org/jira/browse/KARAF-5497>.
Reverting the JLine upgrade fixes the problem, so it may be the easiest
thing to do.


2017-11-28 18:06 GMT+01:00 Jean-Baptiste Onofré :

> Hi,
>
> AFAIR, we already have a Jira about that.
>
> Let us take a look.
>
> Regards
> JB
>
>
> On 11/28/2017 05:58 PM, afbagwe wrote:
>
>> Hey guys. I'm seeing a problem I didn't have in Karaf 4.1.2
>>
>> When I try to install the camel-cxf package I get the following error that
>> causes the Karaf client to crash to the command line:
>>
>> 2017-11-28T08:53:13,838 | ERROR | Karaf local console user karaf |
>> ShellUtil
>> | 42 - org.apache.karaf.shell.core - 4.1.3 | Exception caught while
>> executing command
>> java.io.IOError: java.io.IOException: Stream Closed
>> at org.jline.keymap.BindingReader.readCharacter(BindingReader.
>> java:142)
>> ~[49:org.jline:3.5.0]
>> at org.jline.keymap.BindingReader.readBinding(BindingReader.
>> java:109)
>> ~[49:org.jline:3.5.0]
>> at org.jline.keymap.BindingReader.readBinding(BindingReader.
>> java:60)
>> ~[49:org.jline:3.5.0]
>> at
>> org.jline.reader.impl.LineReaderImpl.readBinding(LineReaderImpl.java:724)
>> ~[49:org.jline:3.5.0]
>> at org.jline.reader.impl.LineReaderImpl.readLine(LineReaderImpl
>> .java:526)
>> ~[49:org.jline:3.5.0]
>> at
>> org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(
>> ConsoleSessionImpl.java:349)
>> [42:org.apache.karaf.shell.core:4.1.3]
>> at java.lang.Thread.run(Thread.java:748) [?:?]
>> Caused by: java.io.IOException: Stream Closed
>> at java.io.FileInputStream.read0(Native Method) ~[?:?]
>> at java.io.FileInputStream.read(FileInputStream.java:207) ~[?:?]
>> at org.jline.terminal.impl.DumbTerminal$1.read(DumbTerminal.java:48)
>> ~[?:?]
>> at org.jline.terminal.impl.DumbTerminal$1.read(DumbTerminal.java:85)
>> ~[?:?]
>> at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
>> ~[?:?]
>> at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
>> ~[?:?]
>> at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) ~[?:?]
>> at sun.nio.cs.StreamDecoder.read0(StreamDecoder.java:127) ~[?:?]
>> at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:112) ~[?:?]
>> at java.io.InputStreamReader.read(InputStreamReader.java:168)
>> ~[?:?]
>> at org.jline.utils.NonBlockingReader.run(NonBlockingReader.java:276)
>> ~[?:?]
>> ... 1 more
>>
>> This is an easily reproducible error.
>>
>> 1. Start with clean Karaf 4.1.3 in the interactive shell
>> 2. add the camel repo (we use 2.18.5)
>> 3. the feature:install camel-cxf
>>
>> I saw that Karaf 4.1.3 upgraded JLine to 3.5. Perhaps this is the problem?
>>
>>
>>
>> --
>> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>>
>>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



-- 

Guillaume Nodet


Re: Confused about feature dependency="true"

2017-12-08 Thread Guillaume Nodet
So first, bundles are only updated / refreshed / restarted if necessary.
If there's no change for a given bundle, it won't be touched at all.

The dependency="true" flag means that the artifact can be used by Karaf to
solve some constraints, but it won't be installed if not needed.
This is mainly useful for imported packages or services, not for the core
of your feature.
For example, say you write a feature that needs Apache commons-collections.
If you don't add the dependency="true" flag on the bundle, it will always be
installed, even if a newer version of commons-collections is required by
another bundle, so you'll end up with two different versions of
commons-collections in use.  If you put the dependency="true" flag on the
commons-collections bundle, then the resolver deploys only the latest
bundle.

The same idea holds if you put the flag on a feature: i.e. it will be used
by the resolver, but if it is not needed, it may not be installed.

2017-12-08 10:27 GMT+01:00 Lukasz Lech :

> Hello,
>
>
>
> I’m quite confused about feature dependencies.  The documentation about
> provisioning doesn’t explain much how the dependency=”true” is expected to
> work. I’ve tried adding or removing it, but I experience unexpected
> behavior…
>
>
>
> I have a core feature and other features that depend on it. The system is
> modular, and there are many production endpoints with diverse feature sets.
>
>
>
> I’ll try to present oversimplified example, which hopefully will give a
> clue what could get wrong.
>
>
>
> I have service1-core, which is used by every other feature.
>
> I have service1 (the API), which has 2 implementations, defined as
> features service1-impl1 and service1-impl2.
>
> I have endpoint1, which uses only service1-impl1.
>
> I have endpoint2, which uses service1-impl1 and service1-impl2.
>
>
>
> You can have only 1 implementation of service1. You can have one of 2
> endpoints, or both.
>
>
>
> The problem is that when I install everything, I see in the logs that many
> bundles defined in the shared features are started many times, as if the
> installation of another 'consumer' of that feature caused the feature to be
> reinstalled. What is worse, I once experienced an infinite loop of
> triggered restarts.
>
>
>
> What I need is a stable way to define the dependency from B to A, telling
> that if B is going to be installed, and A is not present, A should be
> installed, but if A is already installed, it should be not re-installed
> neither re-started.
>
>
>
> So if I say, install service1-impl, the service1-core will be installed
> and started. If I later say, install endpoint2, the service1-core will not
> be touched, because it is already installed and running.
>
>
>
> Shouldn't the dependency="true" work that way? What can cause the common
> feature to be reinstalled many times by installing dependent features?
>
>
>
> Best regards,
>
> Lukasz Lech
>
>
>



-- 

Guillaume Nodet


Re: Confused about feature dependency="true"

2017-12-08 Thread Guillaume Nodet
It's all about constraints.
If something does not work when installed, it means that the constraints
are not correctly expressed.
If you use the maven-bundle-plugin, for example, it should generate service
requirements for the various namespace handlers used.  Those will be used
at resolution time, unless you use an old feature namespace.

For the dependency flag, a rough heuristic is that bundles/features from
the project have dependency="false" while external ones have
dependency="true".

Also, make sure to at least use the karaf-maven-plugin to validate the
resolution of your features at build time.  This will ensure that all
constraints can be solved somehow.

2017-12-08 11:38 GMT+01:00 lechlukasz :

> Does the same appply to features?
>
> Because I've noticed an improvement, after marking the dependent features
> as
> dependency="true":
>
> <feature dependency="true">service1-core</feature>
>
> However, if I've done the same for all my features, including system ones
> (jpa, transaction-api, transaction etc.) I've ended up with my bundles not
> starting because the blueprint namespace handlers were not installed...
>
> I've reached the point where the whole orchestration is very fragile. I
> often need to stop Karaf after installing all features and start it again
> so that everything starts fine.
>
>
>
> --
> Sent from: http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html
>



-- 

Guillaume Nodet


<    1   2   3   4   5   6   >