Re: client.bat does not work in Karaf 4.1.5
We've seen the same thing. 4.1.3 had a problem, 4.1.5 has a different problem; 4.1.4 is the sweet spot at the moment. I can't recall the details, I'll dig them out.

> On 05 March 2018 at 13:02 Jean-Baptiste Onofré wrote:
>
> Hi Nicolas,
>
> the big issue is just Windows ;)
>
> Without kidding, it seems that the ParsedLine line is null in the Console reader.
>
> It could be related to your terminal. What's your Windows version?
>
> Can you try to clean up etc/shell.init.script (especially around the color settings)?
> Does it work with Karaf 4.1.4?
>
> Thanks!
> Regards
> JB
>
> On 03/05/2018 01:56 PM, DUTERTRY Nicolas wrote:
> > Hi,
> >
> > I have just downloaded Karaf 4.1.5 and it seems that, under Windows, it is not
> > possible to obtain an interactive SSH session anymore with the script "client.bat".
> >
> > The login is successful but when I press any key the SSH session ends.
> >
> > I have this error in the logs:
> >
> > 2018-03-05T13:35:39,496 | ERROR | Karaf ssh console user karaf | ShellUtil | 43 - org.apache.karaf.shell.core - 4.1.5 | Exception caught while executing command
> > java.lang.NullPointerException: null
> > at org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:348) [43:org.apache.karaf.shell.core:4.1.5]
> > at java.lang.Thread.run(Thread.java:748) [?:?]
> >
> > 2018-03-05T13:35:40,011 | WARN | sshd-SshServer[50ae478a]-nio2-thread-3 | ServerSessionImpl | 48 - org.apache.sshd.core - 1.6.0 | exceptionCaught(ServerSessionImpl[karaf@/127.0.0.1:65321])[state=Opened] IOException: The specified network name is no longer available.
> >
> > Do you have any idea what the issue is?
> >
> > Regards,
> >
> > --
> > Nicolas Dutertry
> > Sopra HR Software - http://www.soprahr.com/
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
Re: client.bat does not work in Karaf 4.1.5
There was an incompatibility, I believe, in 4.1.3, where a dependency introduced a breaking change in a point release that was picked up by Karaf. 4.1.5 has similar problems, though I don't know what the exact cause is there yet. We jumped to 4.1.5 and then back down to 4.1.4, as that one worked.

In 4.1.3/4.1.5, external SSH clients work, just not client.bat. Vanilla 4.1.5 download, start karaf.bat, start client.bat, can't log in.

> On 05 March 2018 at 13:54 t...@quarendon.net wrote:
>
> We've seen the same thing. 4.1.3 had a problem, 4.1.5 has a different
> problem; 4.1.4 is the sweet spot at the moment. I can't recall the details,
> I'll dig them out.
Aries JAX-RS Whiteboard
For largely historical reasons we have ended up with a setup where we use the standard Karaf HTTP whiteboard service, and then run Jersey on top of that with our own homebrew whiteboard service to register JAX-RS endpoints.

I'm looking to replace this with a better solution, presumably based around the OSGi JAX-RS whiteboard spec and aries-jax-rs-whiteboard (https://github.com/apache/aries-jax-rs-whiteboard), since that now exists, which it didn't when we started out.

Is aries-jax-rs-whiteboard compatible with Karaf, does anyone know? Or does it depend on things that aren't provided, or rely on things from later OSGi specs that Karaf doesn't support? I'm finding I'm having to add in a bunch of bundles, and I'm wondering whether ultimately it's a dead end. Am I better off doing it another way? Karaf comes with CXF, doesn't it? My preference is to use the official OSGi whiteboard, but if that's going to be too hard right now I'm not against doing it a CXF-specific way. The only example I can find so far though looks something like this:

@Component(service=TaskServiceRest.class,
    property={"service.exported.interfaces=*",
              "service.exported.configs=org.apache.cxf.rs",
              "org.apache.cxf.rs.address=/tasklistRest"})

Which seems, well, more complex than necessary in comparison to:

@Component(service=TaskServiceRest.class)
@JaxrsResource

What's the "best" route right now? It has to be declarative services based, and whiteboard pattern.

Thanks.
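For reference, a complete whiteboard-style resource under the OSGi JAX-RS spec would look roughly like the sketch below. The class name and path are made up for illustration; it assumes the R7 whiteboard annotations (org.osgi.service.jaxrs.whiteboard.propertytypes) are available:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.jaxrs.whiteboard.propertytypes.JaxrsResource;

// Hypothetical resource: registering it as a DS service and marking it
// with @JaxrsResource is all the whiteboard needs to pick it up.
@Component(service = TaskServiceRest.class)
@JaxrsResource
@Path("/tasklist")
public class TaskServiceRest {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String list() {
        return "no tasks yet";
    }
}
```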
Re: Aries JAX-RS Whiteboard
> You should then be able to get away with relatively few bundles. The JAX-RS
> Whiteboard API, OSGi Promises + function, the Aries wrapping of the JAX-RS
> API and the Aries JAX-RS Whiteboard implementation should be enough. This is
> by far preferable to using CXF directly, where you don't have proper resource
> isolation, nor do you have a nice way to apply extensions (e.g. JSON support,
> CORS headers, etc).

So I've added those bundles (promise, function and the Aries JAX-RS spec bundle for the JavaJAXRS capability); the problem I now have is that it's missing the JavaAnnotation capability, version 1.3.0. I suspect I have something providing an earlier version of that, but at the moment my OSGi fu hasn't yielded the answer. Good to know, though, that I'm potentially on the right track.
Re: Aries JAX-RS Whiteboard
> that won't work out of the box as Karaf 4.2.x is still R6.
>
> It will work with Karaf 4.3.x, which will be R7.
>
> In the meantime, I'm creating a very simple REST whiteboard pattern for
> CXF. It doesn't use all of the JAX-RS whiteboard spec, but works just fine
> for most of the use cases.

OK, thanks, that's appreciated. We're only doing simple things: registering resources and extensions (message body writers, request/response filters etc.), and we only have one application. It's what I would have thought was pretty normal stuff, to be honest. We do use the SSE (server-sent events) implementation from Jersey at the moment, and also multipart, but it looks like CXF supports those, so I'm sure that'll be possible ;-)

I'll hold off for now then. What's the timescale for Karaf 4.3?

Thanks.
Re: Aries JAX-RS Whiteboard
> Honestly, it sounds like you're about 30 minutes away from having the Aries
> JAX-RS Whiteboard working...

OK, I understand your reference to the ServiceMix annotation bundle earlier now. I had to pick up org.apache.felix.http.servlet-api-1.1.2.jar to get the JavaServlet contract version 3.1.

I've now got Karaf starting cleanly, and it's obviously doing *something*. I suspect if I created a simple example it would be working, but obviously I was naive and greedy and went straight for converting my entire app. I mean, what could go wrong?

I say it's doing something in that I can request an API and get an error such as:

java.lang.ClassNotFoundException: org.glassfish.jersey.internal.RuntimeDelegateImpl not found by javax.ws.rs-api

but the important thing is that my resource class is in the stack trace. So it has registered the endpoint and routed the request correctly; I've just got some lingering references to Jersey. I'll have to clean all that out and it'll probably be more successful.

Thanks.
Re: Aries JAX-RS Whiteboard
> The SSE from JAX-RS 2.1 definitely works (client and server side) with the
> Aries implementation, so hopefully that will give you everything that you
> need.

I have it all working now. I've had to make one or two changes, though, as a result of the change from Jersey to CXF.

Generally the implementation was pretty easy; it certainly works to use Aries JAX-RS Whiteboard within Karaf 4.1.2. Once I worked out what the required dependency bundles were, it was OK. Anecdotally the requests seem faster than with Jersey as well, though I haven't done any testing on that.

I battled with an issue for a while because I had two bundles providing the JAX-RS API: the original one, plus the org.apache.aries.javax.jax.rs-api one (required as it adds a required OSGi contract specification). That caused me some issues with bundles sometimes working and sometimes reporting "exposed to package via two dependency chain" errors, and huge startup times and memory use while the resolver figured it out. That took me a while to iron out.

An issue I failed to resolve was that we had some use of the JAX-RS HTTP client. I never did manage to get it to use the CXF client implementation. The Aries JAX-RS Whiteboard bundles the required parts of CXF within it, but I don't think they are accessible for use. I tried including the relevant parts of CXF, but couldn't get it all to work. It seemed to be a bundle initialisation order issue, in that the Geronimo OSGi locator component was being used to find the JAX-RS HTTP client classes before it had been initialised. Maybe if I'd just included the complete CXF bundle it would all have worked, but that seemed overkill when all I wanted was the client, and when the Aries JAX-RS Whiteboard implementation includes its own copy of CXF as well. Since we only had one class using it, I just substituted a non-JAX-RS HTTP client and the problem went away.

I encountered an issue with Aries JAX-RS Whiteboard that I will raise on GitHub: it doesn't like "void" resource method results. You get:

java.lang.NullPointerException
    at org.apache.aries.jax.rs.whiteboard.internal.cxf.PromiseAwareJAXRSInvoker.checkFutureResponse(PromiseAwareJAXRSInvoker.java:40)

This caused me a bit of rework to get round once I was sure it was the problem.

I also encountered a difference in behaviour between CXF and Jersey. I had a resource component with a path of "/a", and another with a path of "/a/b". In CXF the second of these didn't seem to get matched. Instead I had to add a sub-resource locator method on the first to match "b" and return the second resource component. No big deal, and I don't actually know what the spec says is valid. I'm assuming this behaviour is in CXF rather than the whiteboard.

Apart from all of that, it worked fine. Now to see whether it all actually solves the reliability issues we were having with our own homebrew whiteboard.

Thanks for the assistance.
Re: Aries JAX-RS Whiteboard
> Did you notice that the JAX-RS Whiteboard provides a ClientBuilder > (prototype scoped) service? > > e.g. > @Reference > ClientBuilder clientBuilder; > Ah, no, I hadn't. I had read the words, but obviously not understood the significance. Thanks for the pointer.
Re: Aries JAX-RS Whiteboard
> as I'm creating the Aries JAXRS feature for Karaf, I'm currently using > the one from Aries. That's the one I used.
Replacing or blacklisting felix fileinstall
I'm trying to experiment with an alternative way of loading up configuration. My goal is to disable Felix fileinstall and provide an alternative implementation of the org.apache.felix.cm.PersistenceManager interface. However, so far I'm having great difficulty either blacklisting fileinstall or replacing it.

In etc/org.apache.karaf.features.xml I have tried blacklisting mvn:org.apache.felix/org.apache.felix.fileinstall. However I can't get this to have any effect. I have traced through with a debugger, and the LocationPattern.matches method is returning true, and it appears to be doing the right thing at that level, but the bundle still starts. I can't see any log messages that might be relevant.

I also tried using a bundle replacement in that file. I had slight success -- I can compile my own version of fileinstall 3.6.5 and I can see that it's being loaded in. I say can; I could. I did that this morning, though I seem unable to reproduce it now -- I'm getting both versions of the bundle. Anyway, I struggled to change the group and artifact to something else (I wasn't changing any code, just changing the pom to change the Maven coordinates it was building to, and then trying to reference that in the replacement URL). But as soon as I did that it went back to loading the original.

Anyone got a recipe for providing an alternative implementation of fileinstall?

Thanks.
Re: Replacing or blacklisting felix fileinstall
> I've implemented a SQL mechanism for persisting configurations. I started
> by trying to implement a custom persistence mechanism for Felix CM. This
> didn't work (see
> http://karaf.922171.n3.nabble.com/Custom-PersistenceManager-configurations-not-instantiating-components-td4052786.html#a4052799
> ).
>
> What I ended up doing was having a component which just interacted with
> Configuration Admin (creating configurations at startup; updating the
> database when modifications occur; deleting configurations at shutdown).
> File install is still running - it creates files when my component creates
> configurations, and updates & deletes them as necessary.
>
> The only downside I've found is that factory configurations get a new PID
> every time Karaf starts (as you can't specify the PID for a new factory
> configuration - though I understand this is possible in newer versions of
> Config Admin).

That's the kind of thing I want to do. I have had success with a custom PersistenceManager before, but fileinstall got in the way, and there was a change in behaviour at some point that affected things (this is going back a year or so). So my first aim is to get fileinstall out of the way and put in a simpler component that will just statically load the config from the .cfg files in etc. Then I'll play with trying to save any dynamic changes.

Thanks. Good to know that someone's had at least some kind of success with this sort of thing.
Re: Replacing or blacklisting felix fileinstall
> You should remove fileinstall from etc/startup.properties to remove it.

So how would I do that from a Maven build? We're using the karaf-maven-plugin, so the startup.properties file gets generated automagically. I'm guessing I'd have to either put a version of it in my filtered_resources/etc or unfiltered_resources/etc directory and hope that overwrites it, or put in a post-processing step to remove the line?

Anyway, that gives me a method of experimentation, which is good. I'll worry about how to do it properly if I can get my preferred solution working.
Re: Replacing or blacklisting felix fileinstall
> I'm working on a Karaf Configuration persistence layer to "abstract" > fileinstall, (especially to deal with encryption, AWS keys, etc). It > will help you to implement your own backend. That will be awesome. Any targeted release date? Thanks.
Re: Replacing or blacklisting felix fileinstall
> You have to create a custom framework feature that you use as a
> startupFeature instead of the "standard" karaf feature.

OK, I see. I've been reluctant to do this in the past, as I'd have to make sure to track changes that occur in new versions of the "normal" feature. But it sounds like the simplest solution here.
Re: Replacing or blacklisting felix fileinstall
Just as an update, this is what I've ended up doing, at least for a PoC.

I struggled for a while to try to completely remove fileinstall. I'm sure it's probably possible, but in the end it was easier to "move it out of the way". In custom.properties, I set:

* felix.fileinstall.enableConfigSave to false, to stop it trying to write config files back.
* felix.fileinstall.dir to a bogus directory, to stop it trying to read config in. This works OK as long as you also set the following properties.
* felix.fileinstall.disableNio2 to true, to make it use "simple" recursive directory scanning. Without this it tries to create an NIO2 watch service, which fails due to the bogus directory. That failure is caught and the code falls back to the non-NIO2 version, so setting this just avoids the failure happening.
* felix.fileinstall.noInitialDelay to false, to avoid it doing an initial scan, which again would fail due to the bogus directory.
* felix.fileinstall.poll to a very large number, to avoid it doing a subsequent scan.

I then created my own startup bundle and set its start level to the same value as was used for fileinstall. In this bundle I did my own initial read of the etc directory and loaded the configuration properties into ConfigurationAdmin. This is enough to get Karaf to start.

I then created my own implementation of the Felix ConfigAdmin PersistenceManager interface, registered it as an OSGi service with the "name" property, and set the felix.cm.pm system property in custom.properties to that name.

This gave me a working basis for experimentation. Configuration properties are loaded initially, and then modifications are saved. You may be able to get away without the initial load of the cfg files and rely totally on the PersistenceManager, but I haven't been totally successful with that; you'd think it ought to work. As it is, you get a slightly curious situation where you load the config files in and the PersistenceManager immediately gets asked to save them again.
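Collected together, the custom.properties additions look something like the fragment below. The exact values are my assumption from the description above; the bogus directory path is just an example:

```properties
# Park fileinstall rather than remove it (values are illustrative)
felix.fileinstall.enableConfigSave = false
felix.fileinstall.dir = ${karaf.data}/nonexistent
felix.fileinstall.disableNio2 = true
felix.fileinstall.noInitialDelay = false
felix.fileinstall.poll = 2147483647
```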
The early start level causes some problems: you can't use declarative services, and depending on other bundles such as JSON or YAML parsers is slightly problematic, but otherwise this works fine. Assuming you can get past the early start level, I don't see why writing to a database wouldn't work, but I've just been interested in reading and writing files in a different way.
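For anyone following along, the shape of the PersistenceManager piece is roughly the sketch below. This is a hypothetical in-memory store purely for illustration (a real one would read/write files or a database); it assumes the Felix ConfigAdmin API (org.apache.felix.cm.PersistenceManager) and a plain activator, since DS isn't usable at this start level:

```java
import java.io.IOException;
import java.util.Collections;
import java.util.Dictionary;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.felix.cm.PersistenceManager;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Hypothetical sketch: register a PersistenceManager under a "name"
// service property, then select it with felix.cm.pm=memory in custom.properties.
public class MemoryPersistenceManager implements PersistenceManager, BundleActivator {

    private final Map<String, Dictionary> store = new ConcurrentHashMap<>();

    @Override
    public void start(BundleContext context) {
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("name", "memory"); // matched by the felix.cm.pm system property
        context.registerService(PersistenceManager.class, this, props);
    }

    @Override
    public void stop(BundleContext context) { }

    @Override
    public boolean exists(String pid) { return store.containsKey(pid); }

    @Override
    public Dictionary load(String pid) throws IOException {
        Dictionary d = store.get(pid);
        if (d == null) throw new IOException("No configuration for " + pid);
        return d;
    }

    @Override
    public Enumeration getDictionaries() { return Collections.enumeration(store.values()); }

    @Override
    public void store(String pid, Dictionary values) { store.put(pid, values); }

    @Override
    public void delete(String pid) { store.remove(pid); }
}
```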
Waiting for a command to be available
I'm wanting to automate some setup of a Karaf-based product. I want to create a Docker image that is pre-configured for internal testing. In order to do this I need to run some Karaf shell commands. What I was naively hoping to do is something like:

/opt/karaf/bin/start && /opt/karaf/bin/client -u admin -p admin -f commands.txt && /opt/karaf/bin/stop

or perhaps:

/opt/karaf/bin/karaf < commands.txt

However, my problem is that the shell comes up before the commands I need are available to run. Any suggestions on how to deal with this?

- I can't find any documentation on shell variables that might give me a return code I could check and loop on with a sleep. This would probably work, but feels crude.
- A "does this command exist" command might be useful if I can then loop on its result. I can probably create such a command and put it in a bundle with a low start level to ensure it's available early.
- Crudely sleeping for a long period of time would work (most of the time) but is inefficient.
- Fiddling with the start level of the shell bundle so that it comes up last might work (I'm not even totally sure it would -- since component activation happens asynchronously, I suspect things from earlier bundles are still happening while later bundles are being started).

Or is what I'm trying to do just not very sensible? Even if I created REST API endpoints to do it I'd have essentially the same issue; I could just write the logic in an external program of some kind -- Java, shell, whatever -- and repeatedly "ping" an endpoint until it doesn't return 404, at which point I know the endpoints are available. I'm guessing JMX would have essentially the same problem, in that I would have to start Karaf and then loop until the JMX beans become available.

Note that I found what appears to be some old Karaf documentation (https://svn.apache.org/repos/asf/karaf/site/production/manual/latest-3.0.x/scripting.html) that includes a script for waiting for a command to become available. Perfect! However, that a) doesn't work and b) appears to be based on standard OSGi Gogo shell commands, rather than Karaf shell commands, which aren't registered as services.

Thanks.
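Whichever probe ends up being used (a bin/client invocation, an HTTP ping, a JMX lookup), the retry loop around it is the same. A minimal sketch in plain Java -- the class and method names are my own invention, not anything Karaf provides:

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    // Poll `condition` every `pollMs` milliseconds until it holds,
    // or give up after `timeoutMs` milliseconds and return false.
    public static boolean waitUntil(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            if (condition.getAsBoolean()) {
                return true;
            }
            if (System.currentTimeMillis() >= deadline) {
                return false;
            }
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }
}
```

The condition would then be something like "run bin/client with the command in question and check the exit code", or "HTTP GET the endpoint and check for a non-404 status".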
Re: Waiting for a command to be available
> No way to use etc/shell.init.script ?

Sorry, not sure I understand how that would help. That's a way I could get shell commands run automatically, I'm assuming. That's part of my problem, but the main problem is establishing whether the commands are available to run.

> FYI, I fixed an issue in client, now you can inject directly a script.

By what means? The client appears to support a -f option that pipes commands in from a file. Is that what you are referring to? As I say, the main issue I have is waiting for the commands I need to become available, rather than how to invoke them.
Re: Waiting for a command to be available
> By the way, you can also add a delay to have shell available. You can do
> this by adding karaf.delay.console=true in etc/config.properties.

That looks promising. Can I set that somewhere other than etc/config.properties? Can I set it in custom.properties, for example (apparently not), or somewhere similar? I don't want to have to copy the complete config.properties file and ship a modified copy of it; that feels fragile.

Thanks.
Re: Waiting for a command to be available
> FYI, I fixed an issue in client, now you can inject directly a script.

Do you mean there's a way of using the "karaf" command and running a script? With the karaf.delay.console=true setting you can't pipe a script into the karaf command, as it misinterprets the end of line as the response to "Press Enter to open the shell now...". Using "start && client -f commands.txt && stop" a) seems unreliable and b) would seem to need a sleep anyway, as otherwise the client tries to connect while Karaf is still starting.
Re: Waiting for a command to be available
> Using "start && client -f commands.txt && stop" a) seems unreliable and b)
> would seem to need a sleep anyway as otherwise the client tries to connect
> while karaf is still starting.

Note that trying the "-d" and "-r" options on bin/client along with -f seems to result in either "miss the commands file" or "miss the retry attempts", depending on which way round you put the options (Karaf 4.2.2). I'll have a read through the source and see if I can find out why.
Re: Waiting for a command to be available
> Did you try with 4.2.4 (as I did the fix on the client thread in this
> version) ?

Not sure what issue you have fixed (perhaps you could expand?), but the problem I have remains. The problem is the number of arguments that are passed from the BAT file to java:

ARGS=%2 %3 %4 %5 %6 %7 %8 %9

or

ARGS=%1 %2 %3 %4 %5 %6 %7 %8 %9

Batch files only expand %1 through %9, and I have 10 arguments (-u, -p, -d, -r and -f, each with a value), so in both cases one or more arguments are dropped before they get to java. Most confusing.
Re: Waiting for a command to be available
> Not sure what issue you have fixed (perhaps you could expand?) but the > problem I have still remains. Note also that the client appears to forget the password set on the command line if it has to retry (JIRA raised): > client -u admin -p admin -r 30 > Logging in as admin > retrying (attempt 1) ... > retrying (attempt 2) ... > Password:
Re: Waiting for a command to be available
> No way to use etc/shell.init.script ?

I've understood this comment now. The best method seems to be to use the pattern of:

set EXTRA_JAVA_OPTS=-Dkaraf.shell.init.script=<script file> -Dkaraf.delay.console=true
bin\karaf

(or equivalent), then ensure that the script file ends with "shutdown -f". This works as long as none of the commands in the script returns an error: the error handling surrounds the execution of the complete initialisation script, so if any exception is thrown, nothing further in the script is executed. Starting the script with "shutdown -f 1" seems a sensible precaution to ensure that Karaf doesn't hang forever in those circumstances.

Anyway, this seems like a better pattern than the bin\start && bin\client -f && bin\stop pattern.
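The init script itself (the file pointed at by karaf.shell.init.script) might then look like this sketch; the feature repository and feature names are made up for illustration:

```
# Scheduled shutdown as a safety net in case a later command throws
shutdown -f 1

# The actual setup commands (hypothetical names)
feature:repo-add mvn:com.example/example-features/1.0.0/xml/features
feature:install example-app

# Normal exit once setup has completed
shutdown -f
```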
Karaf 4.2.3 uses glassfish jaxb 2.3.2 which is java 9, is this a problem?
We've tried to upgrade to Karaf 4.2.3/4.2.4 and have hit a problem. I don't believe we explicitly use org.glassfish.jaxb ourselves, but Karaf has a dependency on it. Karaf 4.2.3 updated this dependency to 2.3.2, and that's a Java 9 module. So with a fresh Karaf 4.2.4, if you do:

bundle:install -s 'wrap:mvn:org.glassfish.jaxb/txw2/2.3.2'

you get:

java.lang.ArrayIndexOutOfBoundsException: 19
    at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:576)
    at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:494)
    at aQute.bnd.osgi.Clazz.parseClassFileWithCollector(Clazz.java:483)
    at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:473)
    at aQute.bnd.osgi.Analyzer.analyzeJar(Analyzer.java:2177)
    at aQute.bnd.osgi.Analyzer.analyzeBundleClasspath(Analyzer.java:2083)
    at aQute.bnd.osgi.Analyzer.analyze(Analyzer.java:138)
    at aQute.bnd.osgi.Analyzer.calcManifest(Analyzer.java:616)
    at org.ops4j.pax.swissbox.bnd.BndUtils.createBundle(BndUtils.java:161)
    at org.ops4j.pax.url.wrap.internal.Connection.getInputStream(Connection.java:83)

This seems to be a symptom of "incorrect class file version". We run on Java 8 at the moment, but Karaf is intended to support Java 8 as far as I understand. So is this an issue with Karaf, or is there a dependency we need to update?

Thanks.
Karaf, gogo shell?
I'm trying to get started with Karaf, and am having a few issues.

I have created a simple OSGi enRoute project using bndtools in Eclipse. I have created a feature.xml file for it and have installed it in Karaf. So far so good.

The default project that bndtools generates includes a Gogo command; it's just a "hello world". When I run this within Eclipse, under the normal OSGi environment there, I can run the command from the Gogo shell. Naively I tried doing the same from within the Karaf shell. No joy.

I feel I'm back to square one in terms of diagnostics. Tools I used to rely on, like "lb", "inspect capabilities" and so on, don't exist as far as I can tell, so I can't really tell what might be going wrong. "bundle:diag" shows no unsatisfied requirements though.

Should I expect such a command to work?

Thanks.
Re: Karaf, gogo shell?
I started using Karaf yesterday, version 4.0.6. I'm not aware of shell-compat; what does it do? I can't see it mentioned in any documentation. I naively tried installing it, but it doesn't seem to change my ability to run a Gogo shell command, as far as I can see.

> On 19 September 2016 at 17:08 Benson Margulies wrote:
>
> What version of karaf?
>
> Did you install the shell-compat feature, which is required for gogo commands?
Deploying an application from jenkins for test
What's the best way of deploying an application from a CI build, such as Jenkins, into Karaf for the purposes of testing?

I was hoping to set something up so that I have a Jenkins job that builds my application and then deploys the built application to a test instance of Karaf. The Karaf would be running on a separate machine, so I would most likely write a simple piece of Java, as an Ant task or similar, that uses the JMX management bean interface to interact with Karaf (that seems more reliable than trying to script the shell interface and detect errors).

My question really is what the best way of transferring things is. Naively I assumed that I would publish the artifacts into Artifactory as snapshots and then pull them from there using mvn: URLs, but maybe I just don't know enough about how Maven repositories work, as I'm getting into difficulties with the artifacts being cached, so that what is deployed isn't what has just been built. I could probably pull them directly from Jenkins using the "lastSuccessfulBuild/artifacts" URLs, and maybe that's the best way?

Any thoughts? Thanks.
Creating features.xml files
Up until now I've been developing code using bndtools in Eclipse: writing bndrun files, resolving them, and running my application that way. The resolution step resolves all the requirements from the set of OBR repositories. All very easy (well, it is now, once I got over the learning curve :-) )

I'm now trying to deploy those same applications into Karaf, and so want to write feature repository XML files to do the same thing. Naively, I seem to want to create a feature XML file that is equivalent to my resolved bndrun file. Is there an easy way to do this? Is there an easy way to generate feature repository files, or do I have to maintain them separately?

It also feels like the way I have to write the files depends on how I'm deploying: if I want to deploy from a Maven repository, my bundle references need to be in one form, but if I'm deploying directly from disk, they have to be in another form. Is there an easy way to manage the feature XML files?

Thanks.
Re: Deploying an application from jenkins for test
> you can use Pax Exam[1] for that and start Karaf embedded, this will give
> you a clean state for every run.

Initially I want to do it for testing, but I then want to deploy a "snapshot" and a "release" version of the actual application for general internal demo use. I already have an integration test suite, but what it doesn't test is the deployment into Karaf, so I want to deploy for testing in a way that is as close to the real thing as possible.

> The other way would be to have a Karaf with Jolokia running and deploy via
> JMX, I once created a sample for that [2].

As I say, my question was really one of what the transport is for the bundles. One way or the other I invoke the equivalent of "feature:repo-add" and then "feature:install"; my question is what the URLs are -- where does Karaf actually pull stuff from. To use mvn: repositories I need to find a way of turning off the caching, which so far I've failed to do. Pushing the files onto the remote machine using SCP and then addressing them with "file:///" URLs is another way, but pushing them seems overcomplicated. Addressing the artifacts in Jenkins directly is another option, but then it seems I'm going to have to write something bespoke to create a feature repository with the correct URLs from some kind of template (substituting the correct build number into all of the URLs).

Just hoping there might be a known best way to do this kind of thing, to save me figuring it out!

Thanks.
OSGi Log Service?
According to the documentation, the OSGi Log service is supported. I'm trying to install a bundle that uses it, and I get: missing requirement [bundleid/1.0.0.201609201024] osgi.service; filter:="(objectClass=org.osgi.service.log.LogService)"; effective:=active]] My use is with DS, so: @Reference private LogService log; To me that says that there isn't an implementation of the LogService object that it can find to resolve. Am I misinterpreting? (Karaf 4.0.6). Thanks.
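[Editor's note] One way to check whether anything actually registers the service is to list services by objectClass from the console (a sketch using standard Karaf 4 commands); in a stock Karaf the LogService implementation is typically registered by Pax Logging:

```
karaf@root()> service:list org.osgi.service.log.LogService
```

If nothing is listed, the osgi.service requirement in the error above cannot be satisfied and the bundle will fail to resolve exactly as shown.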
How do I resolve resolution problems in Karaf
I'm really struggling to get my bundles installed in Karaf, so I'd appreciate some hints on how to diagnose some issues. I'm trying to do a feature:install of a features.xml file I've written to install my bundles. My latest is: missing requirement osgi.wiring.package; filter:="(&(osgi.wiring.package=osgi.enroute.dto.api)(version>=1.0.0)(!(version>=2.0.0)))" [caused by: Unable to resolve osgi.enroute.base.api [62](R 62.0): missing requirement [osgi.enroute.base.api [62](R 62.0)] osgi.unresolvable; (&(must.not.resolve=*)(!(must.not.resolve=*)))]]] My interpretation of this is that I've got conflicting versions of something. I have no idea what, nor how to figure out what the cause is. Up to now I've always just been using bndtools in Eclipse (and the bundles I'm installing all work fine there); my first experience of Karaf was yesterday, so beyond what I've read in the docs, I know nothing about what useful commands there might be to help me diagnose. I don't even know how I would list what I've currently got installed that might satisfy osgi.enroute.dto.api or osgi.enroute.base.api. Any hints would be much appreciated. This seems to be extraordinarily more complicated than "resolve" in bndtools, or am I being naive? Thanks.
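[Editor's note] A few stock Karaf console commands answer exactly the questions asked above, what is installed and who could satisfy a given package (a sketch; the package name is taken from the error):

```
karaf@root()> la                              # list all installed bundles
karaf@root()> package:exports | grep enroute  # which bundles export osgi.enroute.* packages
karaf@root()> bundle:requirements 62          # what bundle 62 still requires
karaf@root()> bundle:diag 62                  # why bundle 62 is not started
```

Comparing `package:exports` against the failing filter usually shows quickly whether the package is missing entirely or only present at the wrong version.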
Re: Creating features.xml files
> On 20 September 2016 at 12:52 Benson Margulies wrote: > > > I build all my features with the karaf-maven-plugin. > I don't use Maven, I use eclipse and bndtools, hence gradle as my build environment. Since I didn't have to do anything at all to get the gradle command line build set up, it was just generated for me, I'm reluctant to start manually setting up a maven build environment. Ideally I want to just generate a feature xml file out of the bndtools environment somehow.
Re: How do I resolve resolution problems in Karaf
OK, that's really useful, I'll follow some of that up. As usual I went into this with the naive assumption that since Karaf was an OSGi container, I would just be able to drop my bundles in and it would all just work. Sadly things aren't quite as simple. > On 20 September 2016 at 13:19 David Daniel > wrote: > > > Tom integrating karaf development and bndtools development has been tricky > but it is getting better. Karaf development is centered around Maven's > build process while bndtools is centered around a custom workspace in cnf. > This release bndtools will be supporting maven and you can see the latest > post here https://groups.google.com/forum/#!topic/bndtools-users/VcQ2rsb--Pk > You can see how to include karaf features in a bndrun file in Christian's > examples here https://github.com/cschneider/osgi-chat and the new cxf > example. What I do in my build is I include a features.bnd file where I > map karaf features to bndrun runrequires/runbundles statements and I > include that in my bndrun files. I have to separately maintain my > features.bnd and my features.xml. I do this so I can build both a single > jar deployable and run in karaf and pax-exam. Although the mixing of the > two build processes is hard it is becoming easier by the day. > > On Tue, Sep 20, 2016 at 7:45 AM, wrote: > > > I'm really struggling to get my bundles installed in Karaf, so I'd > > appreciate > > some hints on how to diagnose some issues. I'm trying to do a > > feature:install of > > a features.xml file I've written to install my bundles. 
> > My latest is: > > > > missing requirement osgi.wiring.package; > > filter:="(&(osgi.wiring.package=osgi.enroute.dto.api)( > > version>=1.0.0)(!(version>=2.0.0)))" > > [caused by: Unable to resolve osgi.enroute.base.api [62](R 62.0): missing > > requirement [osgi.enroute.base.api [62](R 62.0)] osgi.unresolvable; > > (&(must.not.resolve=*)(!(must.not.resolve=*)))]]] > > > > My interpretation of this is that I've got conflicting versions of > > something. I > > have no idea what, nor to figure out what the cause is. > > > > Up to now I've always just been using bndtools in eclipse (and the bundles > > I'm > > installing all work fine there), my first experience of Karaf was > > yesterday, so > > beyond what I've read in the docs, I know nothing about what useful > > commands > > there might be to help me diagnose. I don't even know how I would list > > what I've > > currently got installed that might satisfy osgi.enroute.dto.api or > > osgi.enroute.base.api. > > > > Any hints would be much appreciated. > > This seems to be extraordinarily more complicated that "resolve" in > > bndtools, or > > am I being naive? > > > > Thanks. > >
Resolution problem with osgi.enroute.dto.api
I'm tracking down a rather odd problem trying to deploy a bundle into Karaf. The issue appears to be with the osgi.enroute.dto.api package. I'm getting this resolution error from Karaf: missing requirement: osgi.wiring.package; filter:="(&(osgi.wiring.package=osgi.enroute.dto.api)(version>=1.0.0)(!(version>=2.0.0)))" So I don't have anything that provides the osgi.enroute.dto.api package. I have the Karaf obr feature installed, and it's set up to point to https://raw.githubusercontent.com/osgi/osgi.enroute/v1.0.0/cnf/distro/index.xml So I *think* it ought to just find whatever it needs by looking in that OBR. Nice. Looking at the runbundles list in the bndrun file I have, it shows "osgi.enroute.dto.bndlib.provider". At a guess this is where bndtools has resolved that capability. Indeed, if I look at the MANIFEST for that jar, I see Export-Package: osgi.enroute.dto.api;version="1.0.0" If I use Karaf to list the information about that JAR though, it shows: Requires:package:(&(package=osgi.enroute.dto.api)(version>=1.0.0)(!(version>=1.1.0))) Karaf is just showing the information in the https://raw.githubusercontent.com/osgi/osgi.enroute/v1.0.0/cnf/distro/index.xml file here. So, on the one hand, the osgi.enroute.dto.bndlib.provider.jar file says it exports the package, but on the other hand the OBR index says that it has a requirement on that package. On the face of it, this seems like a contradiction. 
Indeed if I install that bundle into Karaf, it complains of: Unsatisfied Requirements: [osgi.enroute.dto.bndlib.provider [60](R 60.0)] osgi.wiring.package; (&(osgi.wiring.package=osgi.enroute.dto.api)(version>=1.0.0)(!(version>=1.1.0))) Equally if I create a very simple bundle that just has: @Reference private osgi.enroute.dto.api.DTOs dtos; I don't seem to be able to deploy it within Karaf, bundle:diag shows: Unsatisfied Requirements: [simple.dtousage [63](R 63.0)] osgi.wiring.package; (&(osgi.wiring.package=osgi.enroute.dto.api)(version>=1.0.0)(!(version>=2.0.0))) [simple.dtousage [63](R 63.0)] osgi.service; (objectClass=osgi.enroute.dto.api.DTOs) Is this an issue with the bundle? Or the bnd index? Or with karaf? Or am I misinterpreting? Thanks.
Re: Deploying an application from jenkins for test
Achim, thanks. I'm trying to use mvn, as it does seem the best option, if I can get it to do what I want. What I'm trying to work out is how to ensure that I get exactly the bundle that's just been published, and nothing gets cached. I'm publishing SNAPSHOT builds to our Artifactory repository. With the default configuration, I don't get the latest snapshot. So I: install the feature; update the code; rebuild; publish; uninstall, then reinstall the feature. I get the same bundle again. Now I don't know anything about how Maven works, but there seems to be an "update policy" that by default is "daily". The implication of this is that if I ran this cycle over two days I might get what I want. The Pax URL code seems to have a "globalUpdatePolicy" configuration. I set that to "always", and now I get a complaint from Aether about not being able to resolve the artifact: Could not find artifact simple-osgi:simle.osgi.command:jar:1.0.0-20160921.072458-1 in artifactory-snapshot(repository URL) at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:444)[7:org.ops4j.pax.url.mvn:2.4.7] at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246)[7:org.ops4j.pax.url.mvn:2.4.7] at shaded.org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223)[7:org.ops4j.pax.url.mvn:2.4.7] at shaded.org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:294)[7:org.ops4j.pax.url.mvn:2.4.7] at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:650)[7:org.ops4j.pax.url.mvn:2.4.7] at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:598)[7:org.ops4j.pax.url.mvn:2.4.7] at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:576)[7:org.ops4j.pax.url.mvn:2.4.7] at 
org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:550)[7:org.ops4j.pax.url.mvn:2.4.7] at org.apache.karaf.features.internal.download.impl.MavenDownloadTask.download(MavenDownloadTask.java:34)[8:org.apache.karaf.features.core:4.0.6] The one in the Artifactory is now 073344 instead. So I don't understand what's going on here. Setting "globalUpdatePolicy" seems to have said "ignore the cached artefacts and go get them again from the remote", but what it doesn't seem to have said is "check for the latest SNAPSHOT version", so it's still trying to download the version that it worked out last time. As I say, my knowledge of Maven is minimal at best. There must be a way of doing this though, isn't there? I mean, convince it somehow to just install whatever is latest on our Artifactory repository? I get the caching thing when you're dealing with release versions, but in this environment snapshots ought not to be cached, and attempting to get version "1.0.0-SNAPSHOT" ought to just go and figure out what the latest version is and download it (you can cache once you've resolved the SNAPSHOT into its actual unique version, but not before). Any ideas? Thanks.
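[Editor's note] For reference, the pax-url-mvn settings in play live in etc/org.ops4j.pax.url.mvn.cfg. A sketch (repository URL and id hypothetical) combining the globalUpdatePolicy mentioned above with a repository entry explicitly flagged as snapshot-enabled, which is the combination one would expect to need for always re-resolving SNAPSHOT metadata:

```properties
# etc/org.ops4j.pax.url.mvn.cfg
org.ops4j.pax.url.mvn.globalUpdatePolicy = always
org.ops4j.pax.url.mvn.repositories = \
    https://artifactory.example.com/libs-snapshot@id=artifactory-snapshot@snapshots@noreleases
```

If the repository is not marked with @snapshots, the resolver may refuse to fetch fresh snapshot metadata regardless of the update policy.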
Create and manage an instance using Jolokia
How do I create and then manage a new instance using Jolokia? I can execute the commands on my root instance to create and start a new instance fine. On create I specify that I want "jolokia" as a feature. Fundamentally, in order to manage the second instance I have to connect directly to it, don't I? This is fine with RMI, as I get the opportunity to set the ports on instance creation. Despite the instance name being in the MBean object names (the "name=[instance]" bit), I can't connect to the root instance and control the second instance simply by setting "name=second" (it doesn't work, anyway). So I'm going to need to set the HTTP port on the second instance to something other than the default, otherwise I won't be able to connect; the second instance will attempt to bind to 8181, and then fail. I seem to have two ways I might be able to influence the new instance. The first is that I have the opportunity to pass in startup options when the instance is started. Can I influence the port that way? These are presumably Java system options, but I'm not aware that you can set OSGi configuration from system options, can you? Or indeed that the HTTP service will pick up a specific system option? The second opportunity seems to be that on creating the instance, I can pass in feature URLs and features to install. Presumably I could pass in the URL of a feature XML file that contains a feature that just has configuration settings in it. But in order to do this, I have to have created that feature XML file and have it in a location that is addressable by a URL from the machine on which Karaf is running; I can't just pass the information in easily. Are there any other ways of achieving this that I've missed? Thanks for any input.
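[Editor's note] The "feature that just has configuration settings" idea could look like this sketch, a feature whose only payload is a config for the Pax Web pid setting the HTTP port (port value arbitrary; feature names hypothetical):

```xml
<features xmlns="http://karaf.apache.org/xmlns/features/v1.4.0" name="second-instance-config">
  <feature name="http-port-override" version="1.0.0">
    <config name="org.ops4j.pax.web">
      org.osgi.service.http.port = 8182
    </config>
  </feature>
</features>
```

Installing this feature in the second instance would write the org.ops4j.pax.web configuration before the HTTP service binds, avoiding the clash on 8181, assuming the feature URL is reachable from the machine running Karaf.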
Setting config properties with JMX bean happens asynchronously?
For test purposes we have a small Java program that automatically deploys our built bundles into a Karaf container as part of the build and test process. However, the deployment process is unreliable. Basically what we do is connect to the main Karaf container and create a new instance, then connect to that instance, set the mvn URL search repositories to the repository into which we deploy the build artifacts, and then do a feature install. We do this all through the JMX interface. The feature install often fails though, claiming: Error resolving artifact : Could not transfer artifact from/to apache (http://repository.apache.org/content/groups/snapshots-group/) What's odd about this message is that we've just changed the mvn URL search path (the org.ops4j.pax.url.mvn.repositories property in the org.ops4j.pax.url.mvn configuration pid) so that it doesn't even include that repository. So why is it looking there? Looking at the implementation of the config JMX bean, it just calls the OSGi method Configuration.update. The actual config update, though, happens asynchronously on another thread (so says the documentation for Configuration.update). It is possible to install a configuration change listener in OSGi that can listen for completion of a configuration change to a certain pid, though I don't know how you tell it was YOUR change rather than a change that someone else has made, but it's possible that would improve the situation. This makes life very awkward, and you would appear to have to rely on a fragile "sleep" in the client to give the configuration changes time to propagate before you continue, or, having made the config changes, stop the instance and restart it again (I *think* that would work, I *think* that the configuration changes are persisted by Karaf synchronously). 
Other than making the config changes in the parent instance so that they are inherited by the child instance, I'm not aware of any other way to make the configuration changes in this kind of scenario. Is this a known issue? Or is this as designed? Thanks.
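[Editor's note] Rather than a single fixed sleep, the client could poll until the change is observable or a deadline passes. A minimal sketch in plain Java (names hypothetical, no OSGi dependencies): the caller would supply a condition such as re-reading the pid's properties over JMX and checking that the repositories value has changed, before issuing the feature install.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Bounded poll: re-check an observable condition until it holds or a
// timeout expires, instead of relying on one fixed sleep after
// Configuration.update() returns.
public final class ConfigAwait {

    private ConfigAwait() {
    }

    /** Polls the condition every pollMs until it is true or timeoutMs elapses. */
    public static boolean await(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return condition.getAsBoolean();
            }
        }
        // One final check at the deadline before giving up.
        return condition.getAsBoolean();
    }
}
```

This is still a workaround for the asynchronous update, not a fix, but it fails fast when the change is applied quickly and fails loudly (returns false) when it never arrives.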
missing requirement osgi.contract=JavaServlet
I'm trying to use the Karaf maven plugin to build a custom Karaf distribution (so "karaf-assembly" packaging type). I'm stuck on the following error though: Failed to execute goal org.apache.karaf.tooling:karaf-maven-plugin:4.1.1:assembly (default-assembly) on project karaf-distro: Unable to build assembly: Unable to resolve root: missing requirement [root] osgi.identity; osgi.identity=core-features; type=karaf.feature; version=1.0.0.SNAPSHOT; filter:="(&(osgi.identity=core-features)(type=karaf.feature)(version>=1.0.0.SNAPSHOT))" [caused by: Unable to resolve core-features/1.0.0.SNAPSHOT: missing requirement [core-features/1.0.0.SNAPSHOT] osgi.identity; osgi.identity=mybundle; type=osgi.bundle; version="[1.0.0.201706120848,1.0.0.201706120848]"; resolution:=mandatory [caused by: Unable to resolve mybundle/1.0.0.201706120848: missing requirement [mybundle/1.0.0.201706120848] osgi.contract; osgi.contract=JavaServlet; filter:="(&(osgi.contract=JavaServlet)(version=3.1.0))"]] I have the org.apache.karaf.features:enterprise feature as a dependency (along with framework, standard, spring), so I believe that it should have the 3.1.0 servlet API, so I don't think it's an issue of requiring 3.1.0 when something earlier is installed. In development we use Felix HTTP (we use bndtools, and that's just what it uses), and the requirement is satisfied by the Apache Felix Servlet API bundle. But Karaf uses pax-web instead, and nothing seems to provide that capability as far as I can tell (looking at the output of bundle:headers in the console). Naively adding a maven dependency on org.apache.felix.http.servlet-api gives me a different, earlier error: Unable to build assembly: [wrap/0.0.0] My bundle is built using bndtools; I'm afraid I don't know at the moment how that manifest requirement comes about, I haven't managed to follow the whole chain through yet. So my question is, within Karaf, where do I get this dependency satisfied? 
Can I easily just substitute felix HTTP in place of pax-web on the basis that that's what we use in production? If so how? Naively adding felix HTTP as dependencies in my pom.xml just gives this "wrap/0.0.0" error, which means nothing to me. Thanks.
RE: missing requirement osgi.contract=JavaServlet
With regard to the "wrap/0.0.0" error, running Maven with -X gives me: Caused by: org.apache.karaf.features.internal.service.Deployer$CircularPrerequisiteException: [wrap/0.0.0] at org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:266) at org.apache.karaf.profile.assembly.Builder.resolve(Builder.java:1429) at org.apache.karaf.profile.assembly.Builder.startupStage(Builder.java:1183) at org.apache.karaf.profile.assembly.Builder.doGenerateAssembly(Builder.java:659) at org.apache.karaf.profile.assembly.Builder.generateAssembly(Builder.java:441) at org.apache.karaf.tooling.AssemblyMojo.doExecute(AssemblyMojo.java:506) at org.apache.karaf.tooling.AssemblyMojo.execute(AssemblyMojo.java:262) ... 22 more Suggesting a circular dependency issue somewhere, though quite where, who knows. There are one or two references to "pax-url-wrap" in the -X output, but that's all there is that mentions "wrap" at any point. Don't know whether that helps?
Re: missing requirement osgi.contract=JavaServlet
OK, I've "solved" this by creating an additional bundle that simply has the required line in the MANIFEST: Provide-Capability: osgi.contract;osgi.contract=JavaServlet;version:Version="3.1";uses:="javax.servlet,javax.servlet.http,javax.servlet.descriptor,javax.servlet.annotation" I say "solved": the Karaf assembly now at least builds. I have yet to determine how successfully it actually runs. However, this seems like a gross hack to me. Shouldn't the pax-web http-api bundle provide this capability? Thanks.
Exception starting karaf custom distro "minimal example" from docs
I've attempted to build a simple custom Karaf distribution using the maven plugin and the karaf-assembly packaging. I can build the assembly, but when I start the result I get an exception. My pom configuration is currently very simple, just:

  <dependencies>
    <dependency>
      <groupId>org.apache.karaf.features</groupId>
      <artifactId>framework</artifactId>
      <version>${karafVersion}</version>
      <type>kar</type>
    </dependency>
    <dependency>
      <groupId>org.apache.karaf.features</groupId>
      <artifactId>standard</artifactId>
      <version>${karafVersion}</version>
      <classifier>features</classifier>
      <type>xml</type>
      <scope>compile</scope>
    </dependency>
  </dependencies>
  ...
  <plugin>
    <groupId>org.apache.karaf.tooling</groupId>
    <artifactId>karaf-maven-plugin</artifactId>
    <configuration>
      <bootFeatures>
        <feature>standard</feature>
      </bootFeatures>
      <javase>1.8</javase>
    </configuration>
  </plugin>

karafVersion is 4.1.1. So I'm just bundling Karaf, I'm not including anything else. This is the "minimal example" in the Karaf docs. On startup I get the log here: https://gist.github.com/tomq42/47403ec44413fdc517c70dc3cccbf0cb I get the same if I specify "minimal" in "bootFeatures". Am I doing something wrong here? Thanks.
Re: Exception starting karaf custom distro "minimal example" from docs
> On 13 June 2017 at 09:08 Jean-Baptiste Onofré wrote: > > > Hi Tom, > > Here's a pom.xml to do what you want: As far as I can see that's the same as the "minimal example" in the docs. If I use your example, I still get errors: 2017-06-13T09:46:01,692 | ERROR | FelixDispatchQueue | pax-web-extender-whiteboard | 118 - org.ops4j.pax.web.pax-web-extender-whiteboard - 6.0.3 | FrameworkEvent ERROR - org.ops4j.pax.web.pax-web-extender-whiteboard org.osgi.framework.BundleException: Activator start error in bundle org.ops4j.pax.web.pax-web-extender-whiteboard [118]. at org.apache.felix.framework.Felix.activateBundle(Felix.java:2288) [?:?] at org.apache.felix.framework.Felix.startBundle(Felix.java:2144) [?:?] at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1371) [?:?] at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308) [?:?] at java.lang.Thread.run(Thread.java:745) [?:?] Caused by: java.lang.ClassCastException: org.apache.felix.httplite.osgi.HttpServiceImpl cannot be cast to org.osgi.service.http.HttpService and 2017-06-13T09:50:23,076 | ERROR | paxweb-config-2-thread-1 | Felix | - - | Bundle org.apache.felix.inventory [36] EventDispatcher: Error during dispatch. (java.lang.ClassCastException: org.apache.felix.httplite.osgi.HttpServiceImpl cannot be cast to org.osgi.service.http.HttpService) and 2017-06-13T09:50:23,076 | ERROR | FelixDispatchQueue | inventory | 36 - org.apache.felix.inventory - 1.0.4 | FrameworkEvent ERROR - org.apache.felix.inventory java.lang.ClassCastException: org.apache.felix.httplite.osgi.HttpServiceImpl cannot be cast to org.osgi.service.http.HttpService etc. The "javax.servlet two chains" issue has gone. However that comes back if I specify "enterprise" instead of "standard" in my "bootFeatures" (investigating I had a "org.apache.karaf.features.cfg" file that specified some things that I had inherited from another example -- deleted that now). 
This clearly indicates to me that there are just some incompatible bundles there. org.ops4j.pax.web.pax-web-extender-whiteboard is expecting org.apache.felix.httplite.osgi.HttpServiceImpl to be an org.osgi.service.http.HttpService when it's not, or at least not the same org.osgi.service.http.HttpService class. Closing Karaf down and then starting it up again though, it still just hangs with no output of any kind, no logging.
Re: Exception starting karaf custom distro "minimal example" from docs
> On 13 June 2017 at 12:32 Jean-Baptiste Onofré wrote: > > > I don't understand why you have the pax-web feature. > > Do you have it defined in bootFeatures ? Or do you install it by hand ? > > I can confirm that I don't have pax-web feature on my custom distro. So I created a completely fresh directory, deleted the contents of my .m2 maven repository cache, created a pom.xml file with your example contents in, and ran "mvn clean install". I ran target/assembly/bin/karaf.bat. feature:list then lists pax-http as "started". Interestingly though I have no exceptions in the log file. Well that's interesting. Worryingly, deleting my .m2 directory contents seems to have altered the behaviour. Going back to my original example, it now works closer to how I would expect. I admit my knowledge of Maven is low, but it's a cache isn't it? Re-downloading the dependencies ought not to have changed anything? That worries me. The good news is that this seems to have fixed things. I no longer have odd exceptions in the log. Confidence in Karaf has increased, confidence in maven lowered! Thanks for the help.
Karaf termination caused by typing something incorrect on the gogo shell in 4.1.1
Start up Karaf with the "bin/karaf.bat" shell script. At the console type help bundle:info You get: gogo: NullPointerException: "in" is null! If I run this from the official 4.1.1 install, it looks like this is trying to "more" the help contents or something. I get a colon, and if you press q it goes back to the prompt. You get no help output though. If I do the same on 4.0.6, I get paginated help out, so something has changed there. Run this from a "custom assembly" consisting of the "standard" feature, and I get: 2017-06-13T14:33:11,173 | ERROR | Karaf local console user karaf | ShellUtil | 55 - org.apache.karaf.shell.core - 4.1.1 | Exception caught while executing command java.lang.NumberFormatException: For input string: "43B" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) ~[?:?] at java.lang.Integer.parseInt(Integer.java:580) [?:?] at java.lang.Integer.(Integer.java:867) [?:?] at org.fusesource.jansi.AnsiOutputStream.write(AnsiOutputStream.java:122) [86:org.fusesource.jansi:1.14.0] at java.io.FilterOutputStream.write(FilterOutputStream.java:125) [?:?] at java.nio.channels.Channels$WritableByteChannelImpl.write(Channels.java:458) [?:?] at org.apache.felix.gogo.runtime.Pipe$MultiChannel.write(Pipe.java:644) [55:org.apache.karaf.shell.core:4.1.1] at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) [?:?] at java.nio.channels.Channels.writeFully(Channels.java:101) [?:?] at java.nio.channels.Channels.access$000(Channels.java:61) [?:?] at java.nio.channels.Channels$1.write(Channels.java:174) [?:?] at java.io.PrintStream.write(PrintStream.java:480) [?:?] at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) [?:?] at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291) [?:?] at sun.nio.cs.StreamEncoder.flushBuffer(StreamEncoder.java:104) [?:?] at java.io.OutputStreamWriter.flushBuffer(OutputStreamWriter.java:185) [?:?] at java.io.PrintStream.write(PrintStream.java:527) [?:?] 
at java.io.PrintStream.print(PrintStream.java:669) [?:?] at java.io.PrintStream.println(PrintStream.java:806) [?:?] at org.apache.felix.gogo.jline.Posix._main(Posix.java:128) [55:org.apache.karaf.shell.core:4.1.1] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?] at java.lang.reflect.Method.invoke(Method.java:497) ~[?:?] at org.apache.felix.gogo.runtime.Reflective.invoke(Reflective.java:136) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.karaf.shell.impl.console.SessionFactoryImpl$ShellCommand.lambda$wrap$0(SessionFactoryImpl.java:195) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.karaf.shell.impl.console.SessionFactoryImpl$ShellCommand$$Lambda$37/1313854807.execute(Unknown Source) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.karaf.shell.impl.console.SessionFactoryImpl$ShellCommand.execute(SessionFactoryImpl.java:241) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:68) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:86) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:560) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:486) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:375) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.felix.gogo.runtime.Pipe.doCall(Pipe.java:417) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:229) [55:org.apache.karaf.shell.core:4.1.1] at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:59) [55:org.apache.karaf.shell.core:4.1.1] at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:?] at java.lang.Thread.run(Thread.java:745) [?:?] 2017-06-13T14:33:11,177 | ERROR | Karaf local console user karaf | ShellUtil | 55 - org.apache.karaf.shell.core - 4.1.1 | Exception caught while executing command java.lang.NumberFormatException: For input string: "43BF" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) ~[?:?]
Re: Karaf termination caused by typing something incorrect on the gogo shell in 4.1.1
> On 13 June 2017 at 15:02 Jean-Baptiste Onofré wrote: > > > Hi Tom > > It has been fixed and will be included in 4.1.2. > Ah, good. 4.1 seems unusable as far as I can see otherwise. I've been looking around, but I can't find where I might get a snapshot build from, what repository I would need to point at? Sorry for all the questions. I have more! Thanks.
Re: Karaf termination caused by typing something incorrect on the gogo shell in 4.1.1
> On 13 June 2017 at 15:50 Jean-Baptiste Onofré wrote: > > > Do you have a chance to test with 4.1.2-SNAPSHOT ? That's what I'm wanting to do, I just can't work out where to get it. I naively put 4.1.2-SNAPSHOT as my karaf version in the pom, but it doesn't find it, as I presumably need to point at a snapshot repository somewhere. That was really my question. What URL should I use for the snapshot URL? Thanks.
Re: Karaf termination caused by typing something incorrect on the gogo shell in 4.1.1
Windows, yes. > On 13 June 2017 at 15:51 Jean-Baptiste Onofré wrote: > > > By the way, are you on Windows ? > > Regards > JB > > On 06/13/2017 04:44 PM, t...@quarendon.net wrote: > > > >> On 13 June 2017 at 15:02 Jean-Baptiste Onofré wrote: > >> > >> > >> Hi Tom > >> > >> It has been fixed and will be included in 4.1.2. > >> > > Ah, good. 4.1 seems unusable as far as I can see otherwise. > > > > I've been looking around, but I can't find where I might get a snapshot > > build from, what repository I would need to point at? > > > > Sorry for all the questions. I have more! > > Thanks. > > > > -- > Jean-Baptiste Onofré > jbono...@apache.org > http://blog.nanthrax.net > Talend - http://www.talend.com
Re: Karaf termination caused by typing something incorrect on the gogo shell in 4.1.1
> On 13 June 2017 at 16:50 Jean-Baptiste Onofré wrote: > > > Yeah, you have to add the snapshot repository in your pom.xml: OK, yes, that's better. Thanks. Now all I need to do is work out why I apparently can't satisfy (&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0))) despite Karaf including SCR. Hmm. Karaf seems to have SCR 2.0.10. I don't know why I need <2; it's just what bndtools builds in as a requirement. Ah, confusion between API version and software version. From the Karaf console, bundle:headers on felix.scr shows: osgi.extender;uses:=org.osgi.service.component;osgi.extender=osgi.component;version:Version=1.3 which is presumably what the requirement is referencing, so that ought to be satisfied. So no idea why that requirement can't be satisfied. Anyway, that's for tomorrow. Thanks.
org.ops4j.pax.url.wrap/2.5.2 missing org.slf4j package?
I'm trying to build a custom karaf distribution with the "karaf-assembly" maven packaging. I am making slow progress :-) My latest issue comes about through trying to resolve the lack of a javax.servlet package requirement. So I naively included the org.apache.felix:org.apache.felix.http.servlet-api bundle as a dependency of the feature I'm including in my assembly. This is what we normally use in development in bndtools, but we normally use Apache Felix HTTP rather than pax-web generally. As soon as I do that I get an odd build failure: Unable to resolve org.ops4j.pax.url.wrap/2.5.2: missing requirement [org.ops4j.pax.url.wrap/2.5.2] osgi.wiring.package; filter:="(&(osgi.wiring.package=org.slf4j)(version>=1.6.0)(!(version>=2.0.0)))"] If I remove org.apache.felix.http.servlet-api it builds, and I can run the resulting Karaf. My Karaf shell fu isn't that great, but feature:list shows me that "wrap" is "started", and bundle:info mvn:org.ops4j.pax.url/pax-url-wrap/2.5.2/jar/uber shows me that it imports the org.slf4j package. I don't know how to find out from the shell where it resolves that from, but the web console shows me:

Imported Packages
org.slf4j,version=1.7.25 from org.ops4j.pax.logging.pax-logging-api (5)
org.slf4j,version=1.6.6 from org.ops4j.pax.logging.pax-logging-api (5)
org.slf4j,version=1.5.11 from org.ops4j.pax.logging.pax-logging-api (5)
org.slf4j,version=1.4.3 from org.ops4j.pax.logging.pax-logging-api (5)

So I don't understand why I'm suddenly getting a resolution failure for slf4j from pax.url.wrap. I also don't understand why including org.apache.felix:org.apache.felix.http.servlet-api suddenly causes this; nothing in its manifest would appear to indicate a need for it. All it is is a simple bundle that contains and exports the javax.servlet API. To sidestep the question, how *should* I resolve the requirement for the javax.servlet package? There doesn't seem to be a pax-web bundle that I can find that provides the javax.servlet package. 
Thanks (again).
Re: org.ops4j.pax.url.wrap/2.5.2 missing org.slf4j package?
> the easiest would be to actually remove that "new" requirement for those > components. > A fix for this is on it's way for Pax Web, so you'll have something that'll > work for you. Sorry Achim, you'll need to spell that out. What is it that you'd fix? Create a pax-web bundle that contains and exports the javax.servlet packages? Thanks in advance of the fix though.
Why can I not satisfy "(&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))"
I'm trying to build a custom karaf distribution using the maven karaf-assembly packaging type. My latest issue is that the build fails with:

    missing requirement osgi.extender; filter:="(&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))"

I interpret this as meaning the bundle uses DS and therefore I need apache.felix.scr. If I don't add my bundle in, the build works, and when I start up the resulting karaf, feature:list shows that scr is installed and started. "bundle:headers mvn:org.apache.felix/org.apache.felix.scr/2.0.10" shows:

    Provide-Capability = osgi.extender;uses:=org.osgi.service.component;osgi.extender=osgi.component;version:Version=1.3

So I appear to have that capability, don't I? So why does the build apparently fail in that way?

My POM has these dependencies:

    <dependency>
      <groupId>org.apache.karaf.features</groupId>
      <artifactId>framework</artifactId>
      <version>${karafVersion}</version>
      <type>kar</type>
    </dependency>
    <dependency>
      <groupId>org.apache.karaf.features</groupId>
      <artifactId>standard</artifactId>
      <version>${karafVersion}</version>
      <classifier>features</classifier>
      <type>xml</type>
      <scope>compile</scope>
    </dependency>

and then the karaf-maven-plugin: org.apache.karaf.tooling karaf-maven-plugin 4.1.1 true wrapper eventadmin standard webconsole http-whiteboard scr prereqs 1.8
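For comparison, a karaf-maven-plugin section for a karaf-assembly build usually looks something like the sketch below. This is my reconstruction of the flattened plugin excerpt above; the placement of the feature list under bootFeatures, and 1.8 as the javase level, are assumptions on my part:

```xml
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <version>4.1.1</version>
  <!-- extensions=true is what activates the karaf-assembly packaging -->
  <extensions>true</extensions>
  <configuration>
    <javase>1.8</javase>
    <bootFeatures>
      <feature>wrapper</feature>
      <feature>eventadmin</feature>
      <feature>standard</feature>
      <feature>webconsole</feature>
      <feature>http-whiteboard</feature>
      <feature>scr</feature>
    </bootFeatures>
  </configuration>
</plugin>
```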
Re: Why can I not satisfy "(&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))"
> Can you share your sample project ? I will try to put together a simple standalone test case. In fact I'll do that for something else as well that I can't get past.
Karaf maven plugin/wrap/slf4j problem
I have put together a simple example of the problem I've been encountering while attempting to create a custom karaf distribution. If you attempt to include a bundle such as org.apache.felix:org.apache.felix.http.servlet-api in a feature, you get this build error:

    missing requirement [org.ops4j.pax.url.wrap/2.5.2] osgi.wiring.package; filter:="(&(osgi.wiring.package=org.slf4j)(version>=1.6.0)(!(version>=2.0.0)))"

org.apache.felix.http.servlet-api has a "compile" dependency on org.apache.tomcat/tomcat-servlet-api/8.0.9, and this is interpreted by the karaf maven plugin as a dependency. Whether it should do that or not I don't know. It doesn't seem like it should, but that's not the issue. Having made that interpretation, it then adds a dependency on wrap:mvn:org.apache.tomcat/tomcat-servlet-api/8.0.9, since it isn't a proper OSGi bundle. The build then fails with the above error.

I don't understand how to resolve this issue. If I remove this and build karaf, in the result I can see that wrap starts, and satisfies the requirement from the pax-logging bundle:

    Imported Packages
    org.slf4j,version=1.7.13 from org.ops4j.pax.logging.pax-logging-api (6)
    org.slf4j,version=1.7.7 from org.ops4j.pax.logging.pax-logging-api (6)
    org.slf4j,version=1.7.1 from org.ops4j.pax.logging.pax-logging-api (6)
    org.slf4j,version=1.6.6 from org.ops4j.pax.logging.pax-logging-api (6)
    org.slf4j,version=1.5.11 from org.ops4j.pax.logging.pax-logging-api (6)
    org.slf4j,version=1.4.3 from org.ops4j.pax.logging.pax-logging-api (6)

So I don't get why it can't be resolved during the build. Is there some dependency I need to add to my feature in the features.xml file? It doesn't feel like I should, as my feature doesn't really depend on "wrap", and I shouldn't be concerned with what it itself then depends on. It feels like the dependency on tomcat-servlet-api is being added in error, but I could live with that if I could get the result to build.

See code at https://github.com/tomq42/karaf-tests-1

Thanks.
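One possible workaround occurs to me, though it is untested and purely my assumption about how the plugin walks dependencies: exclude the transitive tomcat-servlet-api artifact in the POM, so the karaf maven plugin never sees the non-OSGi jar and never generates the wrap: URL for it. A sketch (the servlet-api version is illustrative):

```xml
<dependency>
  <groupId>org.apache.felix</groupId>
  <artifactId>org.apache.felix.http.servlet-api</artifactId>
  <version>1.1.2</version>
  <exclusions>
    <!-- hide the non-OSGi transitive artifact from the karaf plugin -->
    <exclusion>
      <groupId>org.apache.tomcat</groupId>
      <artifactId>tomcat-servlet-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```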
Re: Why can I not satisfy "(&(osgi.extender=osgi.component)(version>=1.3.0)(!(version>=2.0.0)))"
Oddly I can no longer reproduce the issue. I thought it might have gone away because I'd added a <feature>scr</feature> dependency into my feature in the features.xml file. However if I take that out again it still works. Odd. If I encounter it again, I'll reduce it to a simple test case. > On 14 June 2017 at 09:45 Jean-Baptiste Onofré wrote: > > > That would be great. > > In the mean time, I'm testing with the karaf-samples I prepared for the dev > guide update. > > Regards > JB > > On 06/14/2017 10:24 AM, t...@quarendon.net wrote: > >> Can you share your sample project ? > > > > I will try to put together a simple standalone test case. In fact I'll do > > that for something else as well that I can't get past. > > > > -- > Jean-Baptiste Onofré > jbono...@apache.org > http://blog.nanthrax.net > Talend - http://www.talend.com
Re: Karaf maven plugin/wrap/slf4j problem
> I propose to share the code and chat directly (on hangout/skype/private > e-mail). Any help you can give me would be greatly appreciated. I have many issues that I'm trying to resolve, as you can tell.
Does hibernate work in karaf 4.1 (it does in 4.0)?
I've been struggling to get our code that uses Hibernate to work within Karaf 4.1 today. I have created a simple test case, which is on github: https://github.com/tomq42/karaf-tests-hibernate. There are a variety of branches, each with a different combination of hibernate and karaf. The branches karaf_4.0.9 and hibernate_5.3.4_karaf_4.1.1 are the most interesting.

My test is to have a bundle with an activator, where the start method just does:

    javax.validation.Validation
        .byProvider(HibernateValidator.class)
        .providerResolver(new MyProviderResolver())
        .configure()
        .buildValidatorFactory()
        .getValidator();

I write the java with bndtools in Eclipse. I then run the gradle script provided by bndtools to publish the resulting bundle to my local maven repository. I then run maven to build a feature containing the bundle, and then use the karaf-assembly packaging to produce a karaf distribution, including the karaf "enterprise" feature repository, and hence its "hibernate-validator" feature. I then run the distribution, and it either says "great, created a validator", or prints a nasty stack trace.

If I try this with karaf 4.1.1, I can't get it to work. It fails with:

    javax.validation.ValidationException: HV000183: Unable to initialize 'javax.el.ExpressionFactory'. Check that you have the EL dependencies on the classpath
    ...
    Caused by: java.lang.ClassNotFoundException: com.sun.el.ExpressionFactoryImpl not found by org.hibernate.validator

The thing is that when I try with Karaf 4.0.9, I *can* get it to work. Same code, just compiled against different versions of hibernate, so that the requirements match the version of karaf. In both cases, I have configured Eclipse/bndtools to use the same version of hibernate as that version of Karaf ships with. The Karaf 4.0 series seems to use hibernate 5.0.3, Karaf 4.1 seems to use hibernate 5.3.4.

From within eclipse I can run OSGi using bndtools.
If I compile and run against Hibernate 5.0, and ensure that the glassfish.javax.el bundle is available at runtime, I can successfully create a validator. I can then build the same bundle and build my Karaf 4.0 based distribution, and the result runs. I can use hibernate 5.2 as well, and that works within bndtools (I'm not sure I would know how to make karaf use a later version of hibernate than the one it ships with). If I build and run against hibernate 5.3 within bndtools, I get the error (java.lang.ClassNotFoundException: com.sun.el.ExpressionFactoryImpl), and I get the same if I then build the java and build my karaf 4.1 based distribution. Note that in both cases I have glassfish.javax.el available, so the com.sun.el classes should be available.

For a while I was convinced that it was a problem with boot class delegation, since com.sun.* is in the org.osgi.framework.bootdelegation setting in the config.properties file. I wondered whether, because of that boot class delegation, it wouldn't attempt to load the classes out of the bundle, and would instead try to load them from the boot classloader. However the same is true in Karaf 4.0.9, and it works fine there. So I think that was a red herring.

So something seems to have changed within hibernate to render it incompatible with Karaf 4.1? I'm out of ideas. The fact that it works in 4.0.9 suggests that fundamentally I'm not doing anything wrong, and that something has changed in hibernate to cause this to now fail. I found an old thread on this list in which Christian had posted a link to some test cases for Hibernate, here: https://github.com/hibernate/hibernate-validator/blob/master/osgi/integrationtest/src/test/java/org/hibernate/validator/osgi/integrationtest/OsgiIntegrationTest.java. I have experimented with using context class loaders, as per the test case, in the hope that this might fix it, but it doesn't seem to.

Can anyone confirm whether hibernate works on 4.1.1, and if so, what I'm doing wrong?
I'm hoping I've just missed something obvious, but I've looked at this every which way I can and can't find anything. Thanks (again).
Using maven bundle plugin to compile and bundle code not in src/main/java
Forgive my maven ignorance, but I'd like to retrofit some source with a maven build that uses the felix maven bundle plugin; it's just that my source isn't in src/main/java. Can I do that? If so, can someone provide a simple example? It's not clear to me how the source even gets compiled. I've used the karaf-bundle-archetype maven archetype, but nowhere is there anything that says "go compile this code", so I have no handle on what to modify. I'm vaguely guessing I need another "plugin" section, but I'm shooting wildly in the dark there. Thanks.
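In case it helps: Maven's default source root can be overridden with the build/sourceDirectory element (or extra roots added with the build-helper-maven-plugin). A sketch, where src/java and com.example.api are placeholders for the real location and package:

```xml
<build>
  <!-- compile from src/java instead of the default src/main/java -->
  <sourceDirectory>src/java</sourceDirectory>
  <plugins>
    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <extensions>true</extensions>
      <configuration>
        <instructions>
          <!-- bnd instructions (Export-Package etc.) go here -->
          <Export-Package>com.example.api</Export-Package>
        </instructions>
      </configuration>
    </plugin>
  </plugins>
</build>
```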
Writing commands for karaf shell.
There was a thread recently related to this, but I have a different question. I'm confused about the shell situation, and what is "standard" and what is not.

I have naively written some commands for the felix gogo shell. We develop using bndtools (obviously) and use the OSGi enRoute templates, and that's all set up to use the apache felix gogo shell implementation and related bundles. The commands I have written make use of what I *thought* were standard features, such as the org.apache.felix.service.command.Descriptor and org.apache.felix.service.command.Parameter annotations and the org.apache.felix.service.command.CommandSession interface (though the fact that they are in the felix namespace did confuse me). The commands are registered using the osgi.enroute.debug.api.Debug constants.

If I try and run these inside Karaf, it clearly understands that the command is there, so the registration is working; it's just that it obviously doesn't understand the Parameter annotations and CommandSession interface, so it can't call it ("cannot coerce ..." error). From the examples, Karaf has its own, non-standard way of writing command extensions; it doesn't seem to use the osgi.enroute.debug.api.Debug constants.

So can someone clarify the situation? Is *anything* actually standard at all? I can't find any reference in the OSGi specs to a shell (but then I'm probably not looking for the right thing), so perhaps not. Does karaf use the apache felix gogo shell implementation (I thought it did)? If so, should it be able to understand things like the Parameter annotation, and CommandSession? Or should this all work, and I'm just missing some vital bundle in my installation? Thanks.
Re: Writing commands for karaf shell.
Yes, but what's the actual situation from a standards point of view? Is a shell defined by a standard at all? OSGi enroute seems to require a gogo shell and appears to rely on felix gogo shell command framework. Is it just that Karaf happens to ship a shell that happens to be based on the felix gogo shell (or perhaps not, but stack traces seem to suggest so), but that basically if I want to implement a shell command I have to implement it differently for each shell type? That seems a poor situation and leaves me with having to implement one command implementation to be used in the development environment and one that is used in the karaf deployment. Originally I thought that Karaf was the "enterprise version of felix". This doesn't seem to be the case? There *could* be a really powerful environment and ecosystem here, if it was all a *little* bit less fragmented :-) > On 21 July 2017 at 11:01 Jean-Baptiste Onofré wrote: > > > From a karaf perspective, the standard is to use karaf commands/annotations. > Gogo commands support is just for compatibility (as the features are > limited). When gogo commands will improve and provide the same the same > features, it could change. > > Others may have different standpoint but there's mine ;) > > Regards > JB > > On Jul 21, 2017, 11:54, at 11:54, t...@quarendon.net wrote: > >There was a thread recently related to this, but I have a different > >question. > > > >I'm confused about the shell situation, and what is "standard" and what > >is not. > > > >I have naively written some commands for the felix gogo shell. We > >develop using bndtools (obviously) and use the OSGI enRoute templates, > >and that's all set up to use the apache felix gogo shell implementation > >and related bundles. 
> >The commands I have written make use of what I *thought* were standard > >features such as org.apache.felix.service.command.Descriptor and > >org.apache.felix.service.command.Parameter parameter annotations and > >the org.apache.felix.service.command.CommandSession interface (though > >the fact that they are in the felix namespace did confuse me). The > >commands are registered using the osgi.enroute.debug.api.Debug > >constants. > > > >If I try and run these inside Karaf it clearly understands that the > >command is there, so the registration is working, it's just it > >obviously doesn't understand the Parameter annotations and > >CommandSession interface, so can't call it ("cannot cooerce ..." > >error). > > > >From the examples, Karaf has its own, non standard way of writing > >command extensions, it doesn't seem to use the > >osgi.enroute.debug.api.Debug constants. > > > >So can someone clarify the situation? > >Is *anything* actually standard at all? I can't find any reference in > >the OSGi specs to a shell (but then I'm probably not looking for the > >right thing), so perhaps not. > >Does karaf use the apache felix gogo shell implementation (I thought it > >did)? > >If so, should it be able to understand things like the Parameter > >annotation, and CommandSession? > > > >Or should this all work, but it's just that I'm missing some vital > >bundle in my installation? > > > >Thanks.
Re: Writing commands for karaf shell.
> If you look at Karaf >= 4.1.x, a bunch of commands are not coming from > Karaf anymore, but from Gogo or JLine. I moved them when working on the > gogo / jline3 integration. The main point that was blocking imho is that > they did not have completion support. With the new fully scripted > completion system from gogo-jline, gogo commands can have full completion, > so I don't see any blocking points anymore. It's just about tracking > commands and registering them in the karaf shell. I'm sorry, but I don't really understand what you're saying. Are you talking about impediments to making changes to Karaf? Or about how I go about writing commands? Sorry, just not following. Fundamentally, should commands that I write using apache felix gogo command features, such as the Parameter and Descriptor annotations and the CommandSession interface, work? Or if I want to do anything other than a simple "hello world", do I need to work out how to use the karaf shell from within bndtools so that I can write commands using the Karaf command framework? Thanks.
Console branding doesn't affect "client" console?
I'm running this on Windows, I don't know whether the same is true on Linux. I'm using Karaf 4.1.2. If I run the karaf.bat file, I see my branding from my etc/branding.properties file appear. If I then run the client.bat file (or a standard SSH client), I see the normal Karaf logo. The effect is that when I've installed karaf as a service, and so only use the "client.bat" script, I don't get a branded console, which seems to defeat the idea. Can I get a branded console in this environment? Thanks.
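For what it's worth, I believe Karaf also looks for a separate etc/branding-ssh.properties that is applied to the SSH/client console, distinct from etc/branding.properties for the local console. A sketch, with placeholder values (the welcome and prompt keys are the same ones branding.properties uses):

```properties
# etc/branding-ssh.properties -- branding used for client.bat / SSH sessions
welcome = My Custom Distribution (type 'help' for available commands)
prompt = my-distro>\u0020
```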
Karaf security framework and access to OSGi services
The Karaf user guide section 4.1 says: The Apache Karaf security framework is used internally to control the access to: the OSGi services (described in the developer guide) However the developer guide doesn't say anything that I can see about what that means. 5.15 in the Karaf user guide talks about how to set things up, the different login modules etc. It doesn't say anything further that implies that somehow you can apply access control to OSGi services. Am I just misinterpreting the initial statement? Thanks.
Console role based access control and command completion
If I'm logged on to the console as a user, the list of commands I can execute is controlled by access control lists. So, if I'm logged on as a user who has only got the "viewer" role, then I can't shut karaf down; the system:shutdown command requires the "admin" role. Great. However, I still get command completion telling me that system:shutdown is a command, but when I try and invoke it I get "Command not found: system:shutdown", which seems confusing. Is this intentional? I saw a comment in the code somewhere (lost it now) that made me think that the intention was that only commands I can actually invoke are put in the completion list, and indeed that would seem like reasonable behaviour.
Default access control list for commands in the console
I just wanted to check that, in the absence of an explicit access control list for commands in the console, the default is to allow anyone access. Can that be altered at all? Is there a way of providing a default access control list? Or do I have to make sure I provide one for each command scope that I create?
Potential security issue with default karaf console access control lists?
Any user that can log on to the karaf console appears to be able to run the "shell:cat" command (among others), and hence view any file that the operating system user that's running the karaf process can see. Whilst there is access control on a few of the shell scope commands, it seems that the default access control allows any user to run things with no explicit access control. This *feels* like a security issue to me. I'd like to be able to restrict access to the shell completely, but from experiment and looking at the code it appears that anyone who has some kind of "role" assigned to them (either directly, or as a member of a group) appears to be able to connect to the karaf console, and hence can potentially navigate the visible filesystem. This doesn't feel very desirable. It seems a shame that I can no longer restrict access to the console using the "sshRole" configuration property (still referenced in the documentation), but it seems that was removed when the role based access control was introduced. Other than physically restricting access to the SSH port, are there other ways I can restrict access to the console? Or do I need to develop my own access control list for the shell scope, and accept that all users can potentially access the console? Thanks.
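The per-scope ACL files live in etc/ and are named after the command scope, so restricting the file-access commands would look something like the sketch below. The file name follows the existing etc/org.apache.karaf.command.acl.*.cfg convention; the exact set of commands worth locking down is my own guess:

```properties
# etc/org.apache.karaf.command.acl.shell.cfg
# only admins may read or write arbitrary files from the console
cat = admin
tac = admin
exec = admin
```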
Re: Default access control list for commands in the console
Well that was a supplemental question actually. Within a single access control list for a particular scope, I can presumably provide a catch all "*" entry at the bottom of that file, so that I can define a default access control list for all commands *within that scope*, but can I provide one that applies to *all* scopes? The access control list file is selected based on the scope of the command, so it doesn't seem obvious how I would define an access control list that applied to multiple scopes. > On 31 August 2017 at 13:03 Jean-Baptiste Onofré wrote: > > > You can use regex (like *) to match on all "other" commands. > > Regards > JB > > On 08/31/2017 12:38 PM, t...@quarendon.net wrote: > > I just wanted to check that in the absence of an explicit access control > > list for commands in the console, the default is to be to allow anyone > > access. > > Can that be altered at all? Is there a way of providing a default access > > control list at all? Or do I have to make sure I provide one for each > > command scope that I create? > > > > -- > Jean-Baptiste Onofré > jbono...@apache.org > http://blog.nanthrax.net > Talend - http://www.talend.com
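Within a single scope file, the catch-all JB describes would look something like this sketch ("myscope", the command name, and the role names are all placeholders; as far as I can tell each scope still needs its own cfg file, there being no cross-scope default):

```properties
# etc/org.apache.karaf.command.acl.myscope.cfg
dangerous-command = admin
# default for every other command in this scope
* = viewer, admin
```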
Re: Console role based access control and command completion
Hmm, OK. There's a comment somewhere that implies that someone had at least at some point tried doing that, or thought that was what happened. It leads to *slightly* odd behaviour, of being told that a command exists, but then being told, "oh wait, no it doesn't". Thanks anyway. > On 31 August 2017 at 13:02 Jean-Baptiste Onofré wrote: > > > Hi Tom, > > We don't use the ACL in the completers, only on the action step. That's why > you > can complete but not execute. > > Regards > JB > > On 08/31/2017 12:35 PM, t...@quarendon.net wrote: > > If I'm logged on to the console as user, the list of commands I can execute > > is controlled by access control lists. > > So, if I'm logged on as a user who has only got the "viewer" role, then I > > can't shut karaf down, the system:shutdown command requires the "admin" > > role. > > > > Great. > > > > However, I still appear to be able to get command completion that > > system:shutdown is a command, but when I try and invoke it I get "Command > > not found: system:shutdown", which seems confusing. > > > > Is this intentional? I saw a comment in the code somewhere (lost it now) > > that made me think that the intention was that only commands I can actually > > invoke are then put in the completion list, and indeed that would seem like > > reasonable behaviour. > > > > -- > Jean-Baptiste Onofré > jbono...@apache.org > http://blog.nanthrax.net > Talend - http://www.talend.com
Re: Potential security issue with default karaf console access control lists?
OK, JIRA created. I realise it's worse. I can *write* to files with "tac", so I can rewrite the users.properties file if I want to and create new users with admin privileges. Cool. > On 31 August 2017 at 13:04 Jean-Baptiste Onofré wrote: > > > I agree: as we did for vi/edit command, we should limit cat to admin role. > > Can you create a Jira about that ? > > Thanks ! > Regards > JB > > On 08/31/2017 01:01 PM, t...@quarendon.net wrote: > > Any user that can log on to the karaf console appears to be able to run the > > "shell:cat" command (among others), and hence view any file that the > > operating system user that's running the karaf process can see. Whilst > > there is access control on a few of the shell scope commands, it seems that > > the default access control allows any user to run things with no explicit > > access control. > > > > This *feels* like a security issue to me. > > > > I'd like to be able to restrict access to the shell completely, but from > > experiment and looking at the code it appears that anyone who has some kind > > of "role" assigned to them (either directly, or as a member of a group) > > appears to be able to connect to the karaf console, and hence can > > potentially navigate the visible filesystem. This doesn't feel very > > desirable. > > > > It seems a shame that I can no longer restrict access to the console using > > the "sshRole" configuration property (still referenced in the > > documentation), but it seems that was removed when the role based access > > control was introduced. > > > > Other than physically restricting access to the SSH port, are there other > > ways I can restrict access to the console? Or do I need to develop my own > > access control list for the shell scope, and accept that all users can > > potentially access the console? > > > > Thanks. > > > > -- > Jean-Baptiste Onofré > jbono...@apache.org > http://blog.nanthrax.net > Talend - http://www.talend.com
Developing for karaf
I'm trying to get an idea of how people go about developing for karaf, from a practical point of view. So karaf is maven focused. There are archetypes for creating command, general bundles and so on. I can then use maven to generate some eclipse project files that allow me to write and compile my code within eclipse. I guess if I need extra dependencies, I have to edit my pom and hopefully eclipse picks this up (never really done serious maven development, so I don't know how this process really works). When I want to try something out, I have to perform a maven build, start up a copy of karaf, install the bundle (or bundles) into it, then try out my new code? All from the command line? What about debugging? You start karaf with the "debug" option and then remotely connect eclipse to the karaf instance so that you can then place breakpoints and step through the code if necessary? Does it just magically find all the source code? Just trying to get a picture of what the expected workflow is and whether I'm missing anything. We're used to doing things in bndtools where you've got eclipse tooling for everything, so I'm just trying to do a mental reset really on what a "karaf/maven centric" development environment and process would look like. (I'm aware of the "Integrate Apache Karaf and Bnd toolchain" article, but we've had limited success with it beyond simple "hello world" examples. Maybe we just need more perseverance). Thanks.
Re: Developing for karaf
OK, thanks. I'm really actually quite passionate about this whole area. The OSGi community is too small to fragment, and there's an excellent tool in the shape of bndtools (gradle based), but it doesn't "deploy" to anything. It's just super frustrating that there's no clear integration path between that and karaf (maven based). There would be an awesome environment if that could be nailed, so that it operated a bit like developing j2ee webapps for a container like tomcat, which seems roughly analogous (integration of running karaf within eclipse itself, automatic deployment of the bundles into the container and so forth, along with the ease of build and dependency configurations that bndtools has). Anyway. I'll experiment with a "pure karaf" environment and see where it gets me. You've confirmed my basic understanding of the current situation anyhow. > On 05 September 2017 at 13:07 Jean-Baptiste Onofré wrote: > > > Hi Tom, > > You can also create your own Karaf custom distro. > > We are also discussing about Karaf Boot to simplify the bootstrapping/ramp up > on > Karaf. Short term, we are working on an improved dev guide. > > Back on your question: > 1. From a dev perspective, you can have a running karaf instance, you just do > mvn install on your bundle, and thanks to bundle:watch, it's automatically > updated in Karaf (not need to perform any command). > 2. For debugging, you are right: just start karaf in debug mode (bin/karaf > debug), it binds port 5005 by default, and then, connect your IDE remotely. > > Regards > JB > > On 09/05/2017 01:48 PM, t...@quarendon.net wrote: > > I'm trying to get an idea of how people go about developing for karaf, from > > a practical point of view. > > > > So karaf is maven focused. There are archetypes for creating command, > > general bundles and so on. > > I can then use maven to generate some eclipse project files that allow me > > to write and compile my code within eclipse. 
I guess if I need extra > > dependencies, I have to edit my pom and hopefully eclipse picks this up > > (never really done serious maven development, so I don't know how this > > process really works). > > > > When I want to try something out, I have to perform a maven build, start up > > a copy of karaf, install the bundle (or bundles) into it, then try out my > > new code? All from the command line? > > What about debugging? You start karaf with the "debug" option and then > > remotely connect eclipse to the karaf instance so that you can then place > > breakpoints and step through the code if necessary? Does it just magically > > find all the source code? > > > > Just trying to get a picture of what the expected workflow is and whether > > I'm missing anything. We're used to doing things in bndtools where you've > > got eclipse tooling for everything, so I'm just trying to do a mental reset > > really on what a "karaf/maven centric" development environment and process > > would look like. (I'm aware of the "Integrate Apache Karaf and Bnd > > toolchain" article, but we've had limited success with it beyond simple > > "hello world" examples. Maybe we just need more perseverance). > > > > Thanks. > > > > -- > Jean-Baptiste Onofré > jbono...@apache.org > http://blog.nanthrax.net > Talend - http://www.talend.com
Creating a karaf feature containing a karaf shell command breaks karaf
Clearly this can't be true, since karaf ships with features containing bundles containing commands, but I can't get it to work.

I've created a simple karaf shell command, following the tutorial in the documentation. As per the example, it just says "hello world". If I build that as an isolated bundle, install it into karaf and run it, it works, as per the documentation.

What I'm trying to do, though, is build that into a feature to include in a custom karaf distribution. I can build the feature, and I can manually install the feature into a karaf distribution, and it works OK; I can run the resulting command. I can build the karaf distribution containing the feature OK, but when I then run it, the resulting karaf fails to initialise properly.

In the log file, I get:

    Adding features:
    Changes to perform:
      Region: root
        Bundles to install:
        ...
        mvn:org.apache.karaf.shell/org.apache.karaf.shell.console/4.1.2
        null                  <-- This is clearly bad
        mvn:org.apache.karaf.shell/org.apache.karaf.shell.ssh/4.1.2
        ...
    Installing bundles:
    ...
    null                      <-- This is the same null as before and causes the problem below.
    Error installing boot features
    java.lang.IllegalStateException: Resource has no uri
        at org.apache.karaf.features.internal.service.Deployer.getBundleInputStream(Deployer.java:1460) [10:org.apache.karaf.features.core:4.1.2]
        at org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:766) [10:org.apache.karaf.features.core:4.1.2]
        at org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1233) [10:org.apache.karaf.features.core:4.1.2]
        at org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$0(FeaturesServiceImpl.java:1132) [10:org.apache.karaf.features.core:4.1.2]
        at org.apache.karaf.features.internal.service.FeaturesServiceImpl$$Lambda$15/951619949.call(Unknown Source) [10:org.apache.karaf.features.core:4.1.2]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:?]

The console then goes funny, and you get:

    Error in initialization script: etc\shell.init.script: String index out of range: 0

which I'm assuming is a knock-on issue. I've got no idea where this "null" bundle is coming from, but it's clearly causing the karaf initialisation to go wrong.

I have created an example project at https://github.com/tomq42/karaf-command-feature which I hope shows the problem. Just build with maven, then run the karaf-distro\target\assembly\bin\karaf(.bat) command. You should see the error above on the console, and the error in the log.

Any insight would be very welcome. Thanks.
Re: Creating a karaf feature containing a karaf shell command breaks karaf
There's a complete example here: https://github.com/tomq42/karaf-command-feature Thanks. > On 07 September 2017 at 09:15 Jean-Baptiste Onofré wrote: > > > Hi Tom, > > can you share the pom.xml you use to create your custom distro ? > > It looks like some resources are missing in the distro. > > Regards > JB > > On 09/07/2017 10:06 AM, t...@quarendon.net wrote: > > Clearly this can't be true, since karaf ships with features containing > > bundles containing commands, but I can't get it to work. > > > > I've created a simple karaf shell command, following the tutorial in the > > documentation. As per the example, it just says "hello world". > > If I build that as an isolated bundle, install it into karaf and run it, it > > works, as per the documentation. > > > > What I'm trying to do though is build that into a feature to include into a > > custom karaf distribution. > > I can build the feature, and I can manually install the feature into a > > karaf distribution, and it works OK, I can run the resulting command. > > I can build the karaf distribution containing the feature OK, but when I > > then run the resulting karaf then fails to initialise properly. > > > > > > In the log file, I get: > > Adding features: > > Changes to perform: > > Region: root > > Bundles to install: > > ... > > mvn:org.apache.karaf.shell/org.apache.karaf.shell.console/4.1.2 > > null <-- This is clearly bad > > mvn:org.apache.karaf.shell/org.apache.karaf.shell.ssh/4.1.2 > > ... > > > > Installing bundles: > > ... > > null <-- This is the same null as before and causes the problem below. 
> > Error installing boot features > > java.lang.IllegalStateException: Resource has no uri > > at > > org.apache.karaf.features.internal.service.Deployer.getBundleInputStream(Deployer.java:1460) > > [10:org.apache.karaf.features.core:4.1.2] > > at > > org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:766) > > [10:org.apache.karaf.features.core:4.1.2] > > at > > org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1233) > > [10:org.apache.karaf.features.core:4.1.2] > > at > > org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$0(FeaturesServiceImpl.java:1132) > > [10:org.apache.karaf.features.core:4.1.2] > > at > > org.apache.karaf.features.internal.service.FeaturesServiceImpl$$Lambda$15/951619949.call(Unknown > > Source) [10:org.apache.karaf.features.core:4.1.2] > > at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?] > > at > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > > [?:?] > > at > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > > [?:?] > > > > The console then goes funny, and you get: > > Error in initialization script: etc\shell.init.script: String index out of > > range: 0 > > which I'm assuming is a knockon issue. > > > > I've got no idea where this "null" bundle is coming from, but it's clearly > > causing the karaf initialisation to go wrong. > > > > I have created an example project at > > https://github.com/tomq42/karaf-command-feature which I hope shows the > > problem. > > Just build with maven, then run the > > karaf-distro\target\assembly\bin\karaf(.bat) command. You should see the > > error above on the console, and the error in the log. > > > > Any insight would be very welcome. > > Thanks. > > > > -- > Jean-Baptiste Onofré > jbono...@apache.org > http://blog.nanthrax.net > Talend - http://www.talend.com
Re: Creating a karaf feature containing a karaf shell command breaks karaf
> Thanks, I'm checking out and I will take a look (and eventually submit a PR ;)).

OK, thanks.
Re: Creating a karaf feature containing a karaf shell command breaks karaf
So although moving eventadmin out of startupFeatures makes the command work, it seems to break a bunch of other things, so it doesn't seem wise after all. E.g., I get things like the following that only seem to happen if I've moved that line:

Bundle org.ops4j.pax.web.pax-web-extender-whiteboard [115] Error starting mvn:org.ops4j.pax.web/pax-web-extender-whiteboard/6.0.6 (org.osgi.framework.BundleException: Activator start error in bundle org.ops4j.pax.web.pax-web-extender-whiteboard [115].)
org.osgi.framework.BundleException: Activator start error in bundle org.ops4j.pax.web.pax-web-extender-whiteboard [115].
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2289) [?:?]
at org.apache.felix.framework.Felix.startBundle(Felix.java:2145) [?:?]
at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1372) [?:?]
at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308) [?:?]
at java.lang.Thread.run(Thread.java:745) [?:?]
Caused by: java.lang.IllegalStateException: HttpService must be implementing Pax-Web WebContainer!
at org.ops4j.pax.web.extender.whiteboard.internal.ExtendedHttpServiceRuntime.serviceChanged(ExtendedHttpServiceRuntime.java:110) ~[?:?]
at org.ops4j.pax.web.extender.whiteboard.internal.ExtendedHttpServiceRuntime.serviceChanged(ExtendedHttpServiceRuntime.java:44) ~[?:?]
at org.ops4j.pax.web.extender.whiteboard.internal.util.tracker.ReplaceableService.bind(ReplaceableService.java:86) ~[?:?]
at org.ops4j.pax.web.extender.whiteboard.internal.util.tracker.ReplaceableService$Customizer.addingService(ReplaceableService.java:105) ~[?:?]
at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:941) ~[?:?]
at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:870) ~[?:?]
at org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:256) ~[?:?]
at org.osgi.util.tracker.AbstractTracked.trackInitial(AbstractTracked.java:183) ~[?:?]
at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:318) ~[?:?]
at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:261) ~[?:?]
at org.ops4j.pax.web.extender.whiteboard.internal.util.tracker.ReplaceableService.start(ReplaceableService.java:72) ~[?:?]
at org.ops4j.pax.web.extender.whiteboard.internal.ExtendedHttpServiceRuntime.start(ExtendedHttpServiceRuntime.java:153) ~[?:?]
at org.ops4j.pax.web.extender.whiteboard.internal.Activator.start(Activator.java:65) ~[?:?]
at org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697) ~[?:?]
at org.apache.felix.framework.Felix.activateBundle(Felix.java:2239) ~[?:?]
... 4 more
Config admin doesn't correctly store strings containing backslashes
Really? I find this somewhat hard to believe, but I'm fairly sure this is the case. Anyway.

I'm using config admin to update a configuration from within a Karaf command. One of the properties is a string that contains a filename. I'm on Windows, so the filename contains backslashes. I type in c:\work\tomq.keytab. I have double checked with the debugger that what my Hashtable contains is a string with backslashes in; it's not that the backslashes are being removed by the shell command input reader. I call "update" on the configuration. My component correctly sees the filename with backslashes. The properties file gets written with something like:

file = "c:\\work\\tomq.keytab"

So far so good. However, if I restart Karaf, it seems to go wrong. With Karaf stopped, the file still contains the same value. I start Karaf. My component is then activated with a string that has the backslashes reinterpreted as escapes. The component sees:

c:work omq.keytab

The properties file is also rewritten by something so that the backslashes have been reinterpreted in, well, a way I don't understand. It now contains:

file = "c:work\tomq.keytab"

I don't know who is responsible here, whether this is all felix fileinstall, or Karaf. I'm using Karaf 4.1.2.

If it makes a difference, my component is written using declarative services and has something along the lines of:

@interface Config {
    String file();
}

@Activate
public void activate(Config config) {}

I've double checked this a few times as I feel I must be seeing things, because surely this should work? Can anyone confirm my madness/sanity? Thanks.
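For what it's worth, the "second interpretation" described above matches plain java.util.Properties escape handling: on load, a backslash before an ordinary character is dropped, and \t becomes a real tab. A minimal stdlib sketch (not the actual felix fileinstall/configadmin code path, just an illustration of the escape rules that produce the observed mangling):

```java
import java.io.StringReader;
import java.io.StringWriter;
import java.util.Properties;

public class BackslashDemo {
    // Load a line whose value contains single (unescaped) backslashes,
    // the way the rewritten file above ends up after a restart.
    static String loadRaw() throws Exception {
        Properties p = new Properties();
        // The text being parsed is: file=c:\work\tomq.keytab
        p.load(new StringReader("file=c:\\work\\tomq.keytab"));
        // \w loses its backslash, \t becomes a real tab
        return p.getProperty("file");
    }

    // Store the value via Properties.store, which doubles each backslash,
    // so a later load round-trips it unchanged.
    static String roundTrip() throws Exception {
        Properties p = new Properties();
        p.setProperty("file", "c:\\work\\tomq.keytab"); // value is c:\work\tomq.keytab
        StringWriter w = new StringWriter();
        p.store(w, null);
        Properties q = new Properties();
        q.load(new StringReader(w.toString()));
        return q.getProperty("file");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadRaw());   // "c:work<TAB>omq.keytab"
        System.out.println(roundTrip()); // unchanged: c:\work\tomq.keytab
    }
}
```

So whichever component rewrote the file with single backslashes handed the next load a value full of accidental escapes.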
Providing alternative config mechanism than felix.fileinstall/Preserving config changes on re-install
I'm trying to establish some alternative configuration behaviour to what felix fileinstall gives me.

I have written a very simple component that reads configuration files in from /etc and updates config admin with the information, much like fileinstall does. I can run this and it appears to work; however, I still have the existing mechanism in place that I'd like to remove. So I naively did the following:

set the start-level of my bundle to 11, same as fileinstall
set felix.fileinstall.enableConfigSave to false in etc/custom.properties
set felix.fileinstall.dir to empty

Karaf fails to start.

So my suspicion is that felix fileinstall is more centrally required than I'd hoped. Looking at the Karaf code, there are certainly a few places where it assumes a configuration contains a felix.fileinstall.filename property that names the file where the configuration is stored, and seems to directly read and write those files. This appears to mean that I wouldn't be able to substitute my own configuration storage backend, which is a shame. (I'm actually confused about what org.apache.karaf.config.core.ConfigRepository is doing here: why does it write directly to the file, rather than just letting fileinstall do it, especially as it only seems to allow for ".cfg" and not ".config" files?) There may be other reasons why Karaf won't start, though.

Is it likely that I could substitute felix.fileinstall in this way?

What I was actually trying to solve was what to do when a user uninstalls and reinstalls our Karaf-based product, and how to preserve any configuration changes. What I had hoped to do was store any modified configuration properties in separate files (just the properties that differ from the defaults or from the originals in the etc/*.cfg files), so that the original etc/*.cfg files could be replaced without difficulty, and the changed configuration would then be reapplied.
So alternative question: How else can I achieve the same thing without making the users manually merge the configuration changes? Thanks.
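The file-reading half of such a component is plain JDK code. A hedged sketch of that part only (the PID and file names here are hypothetical; pushing each dictionary into config admin via ConfigurationAdmin.getConfiguration(pid, null).update(props) is left out because it needs a running framework):

```java
import java.io.IOException;
import java.io.Reader;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Dictionary;
import java.util.Hashtable;
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;

public class EtcConfigReader {
    // Derive a config-admin PID from a file name: "org.example.foo.cfg" -> "org.example.foo".
    static String pidFor(Path file) {
        String name = file.getFileName().toString();
        return name.substring(0, name.length() - ".cfg".length());
    }

    // Load every *.cfg under an etc/ directory into a PID -> properties map.
    static Map<String, Dictionary<String, String>> loadAll(Path etcDir) throws IOException {
        Map<String, Dictionary<String, String>> result = new TreeMap<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(etcDir, "*.cfg")) {
            for (Path f : files) {
                Properties p = new Properties();
                try (Reader r = Files.newBufferedReader(f)) {
                    p.load(r);
                }
                Hashtable<String, String> dict = new Hashtable<>();
                for (String key : p.stringPropertyNames()) {
                    dict.put(key, p.getProperty(key));
                }
                result.put(pidFor(f), dict);
            }
        }
        return result;
    }
}
```

The hard part, as the thread goes on to discuss, is not this parsing but getting the component activated early enough and unhooking fileinstall.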
Re: Providing alternative config mechanism than felix.fileinstall/Preserving config changes on re-install
I can see KARAF-418, but that's pretty old, and sounds like it was considered unnecessary? Is there anything else I can't find?

I don't necessarily want to store things in a database, I just want different behaviour from normal: to provide my own implementation of something that listens to config changes and injects configuration on startup. And I can write that bit; what I can't do is substitute it in at a central enough level to replace fileinstall.

I've made a little progress. I manually edited the "startup.properties" file and put my bundle in there at level 11. It got activated. So what I don't currently understand is a) where that file comes from (it's clearly generated as part of building my karaf distribution, it's not in source control) and b) what specifying the start-level in the feature.xml file does (since it doesn't appear to specify the start level :-)).

My problem now appears to be that I'd written my code using declarative services, and I think I need to go back to old-fashioned bundle activators and service trackers in order to reduce the dependencies and make the code work in the "simple" environment I encounter down at that start level.

There was also a comprehension question of why the ConfigRepository was attempting to write the config files directly, rather than just calling Configuration.update. Surely one thing or the other (calling update I assume is preferable), but not both?

Thanks.

> On 06 October 2017 at 11:40 Jean-Baptiste Onofré wrote:
>
> Hi
>
> I guess you want to use an alternative backend to the filesystem (a database for instance).
>
> In that case we have a Jira about that and you can provide your own persistence backend.
>
> Regards
> JB
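On the startup.properties point: the file is a plain java.util.Properties file mapping a bundle URL to its start level, which is why hand-editing it works. Because the URL sits in the key position, its colon has to be backslash-escaped. A stdlib sketch of parsing such an entry (the mvn: URL shown is illustrative, not copied from a real distribution):

```java
import java.io.StringReader;
import java.util.Properties;

public class StartupPropertiesDemo {
    // Parse one startup.properties-style line into (bundle URL, start level).
    static Object[] parse(String line) throws Exception {
        Properties p = new Properties();
        p.load(new StringReader(line));
        // One entry per line: the key is the bundle URL, the value its start level.
        String url = (String) p.propertyNames().nextElement();
        int level = Integer.parseInt(p.getProperty(url).trim());
        return new Object[] { url, level };
    }

    public static void main(String[] args) throws Exception {
        // In the file, the key's colon is escaped: mvn\:org.example/my-config-bundle/1.0.0 = 11
        Object[] entry = parse("mvn\\:org.example/my-config-bundle/1.0.0 = 11");
        System.out.println(entry[0] + " -> " + entry[1]); // mvn:org.example/my-config-bundle/1.0.0 -> 11
    }
}
```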
Re: Providing alternative config mechanism than felix.fileinstall/Preserving config changes on re-install
> You can implement your own PersistenceManager (ConfigAdmin service).

OK, I'm actually super confused now (not hard).

felix configadmin appears to have logic in it that persists configurations to and from files. It's unclear, in the karaf environment, where the FilePersistenceManager is attempting to read, and more importantly write, changes to/from. I can't see any evidence of it writing files anywhere, but the logic would appear to be to fall back to writing to the current directory if there isn't any explicit configuration (which there doesn't appear to be, that I can find).

Given the presence of felix configadmin and FilePersistenceManager, I can only assume that the reason that fileinstall is *actually* the thing that is used in karaf for persistence of configuration is the polling behaviour, allowing you to change the files and have it pick the changes up without having to restart.

Say I implemented my own felix configadmin PersistenceManager. I'd still need to get it activated very early, which is something I've not yet understood how to do. Any suggestions for how to get a bundle activated super early, same as configadmin/fileinstall?
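For anyone landing here: the PersistenceManager contract itself is small. Below is a self-contained, in-memory sketch; the interface is declared locally so the snippet compiles without any OSGi jars, and it is only intended to mirror the shape of Felix's org.apache.felix.cm.PersistenceManager (treat the exact signatures as an assumption and check the Felix javadoc before relying on them):

```java
import java.io.IOException;
import java.util.Collections;
import java.util.Dictionary;
import java.util.Enumeration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryPersistence {
    // Local stand-in mirroring the shape of Felix's PersistenceManager (assumed signatures).
    interface PersistenceManager {
        boolean exists(String pid);
        Dictionary<String, Object> load(String pid) throws IOException;
        Enumeration<Dictionary<String, Object>> getDictionaries() throws IOException;
        void store(String pid, Dictionary<String, Object> props) throws IOException;
        void delete(String pid) throws IOException;
    }

    // Simplest possible backend: a concurrent map keyed by PID.
    static class MapPersistenceManager implements PersistenceManager {
        private final Map<String, Dictionary<String, Object>> store = new ConcurrentHashMap<>();

        public boolean exists(String pid) {
            return store.containsKey(pid);
        }

        public Dictionary<String, Object> load(String pid) throws IOException {
            Dictionary<String, Object> d = store.get(pid);
            if (d == null) throw new IOException("No configuration for PID " + pid);
            return d;
        }

        public Enumeration<Dictionary<String, Object>> getDictionaries() {
            return Collections.enumeration(store.values());
        }

        public void store(String pid, Dictionary<String, Object> props) {
            store.put(pid, props);
        }

        public void delete(String pid) {
            store.remove(pid);
        }
    }
}
```

The real difficulty, as the message says, is not implementing the store but registering it early enough that configadmin picks it up instead of its file-based default.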
Using java.util.logging
Documentation indicates I can do this. Pax logging indicates I can do this. Yet when I try, I can't.

I have some code that uses java.util.logging. If I log at level SEVERE, it comes out on the console, and only the console, and doesn't come out in the karaf.log file. Anything less than that doesn't come out anywhere. This is the default java.util.logging behaviour, isn't it?

I'm starting using the karaf.bat file. As a double check I've printed out the value of the java.util.logging.config.file system property and it does indeed point to the etc\java.util.logging.properties. So I'm somewhat confused.

Is there any additional configuration I need to do to make JDK logging come out somewhere useful? I'm using a 4.1.3 snapshot of karaf on Windows. Thanks.
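As a baseline for comparison: in stock java.util.logging (outside any container), the root logger has a ConsoleHandler, and both logger and handler default to level INFO, so INFO and above reach the console and nothing reaches a file unless a FileHandler is configured. A minimal stdlib check of the level filtering, using an in-memory handler instead of a file so it is self-contained:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class JulLevelsDemo {
    // A handler that remembers what it was asked to publish.
    static class CapturingHandler extends Handler {
        final List<LogRecord> records = new ArrayList<>();
        public void publish(LogRecord r) { if (isLoggable(r)) records.add(r); }
        public void flush() {}
        public void close() {}
    }

    // Log at three levels through a handler set to INFO (the JUL default
    // handler level) and report which messages got through.
    static List<String> captured() {
        Logger log = Logger.getLogger("demo.jul");
        log.setUseParentHandlers(false); // don't also hit the console
        log.setLevel(Level.ALL);         // let the handler do the filtering
        CapturingHandler h = new CapturingHandler();
        h.setLevel(Level.INFO);
        log.addHandler(h);

        log.severe("boom");
        log.info("hello");
        log.fine("debug detail");        // below INFO: dropped by the handler

        List<String> messages = new ArrayList<>();
        for (LogRecord r : h.records) messages.add(r.getMessage());
        return messages;
    }
}
```

If SEVERE is the only level that appears, something has raised a logger or handler level well above the defaults, which points at container-side configuration rather than the application code.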
Re: Using java.util.logging
> Yet when I try, I can't.

OK, so that's frustrating. If I try a simple "hello world" bundle that just logs a message, run in a clean karaf, it DOES come out in the karaf.log file. Run it in our application though and it doesn't. So some bundle is interfering with the logging configuration in some way. That's going to be so easy to diagnose :-)

Why is it that logging is so complicated, and that there are about a hundred logging frameworks? About half the projects on GitHub must be dedicated to logging.
Re: Using java.util.logging
> can you check if JUL packages really comes from Pax Logging ?

Err, how can they? Perhaps I'm misunderstanding. Surely the java.util.logging packages are provided by the JRE?

I DO have one log message that DOES come out. Very odd. If I step through the code in the debugger, I appear to have two code paths. In one, my calls to java.util.logging.Logger.info end up in org.ops4j.pax.logging.log4j2.internal.JdkHandler. Later, a call to java.util.logging.Logger.info ends up in org.ops4j.pax.logging.service.internal.JdkHandler. These log calls end up being passed through to log4j, rather than log4j2. log4j isn't set up at all, only log4j2 is, so these messages don't come out.

So for some reason we have the "log4j" pax logging bundle ("OPS4J Pax Logging - Service", "org.ops4j.pax.logging.pax-logging-service") AS WELL as the log4j2 logging bundle ("OPS4J Pax Logging - Log4j v2", "org.ops4j.pax.logging.pax-logging-log4j2"). Not sure where that's come from; it doesn't appear in the standard karaf distro. I don't believe we explicitly list it anywhere as a requirement, so at the moment I'm not sure how to determine what has caused it to be included by the distribution build process.
Re: Using java.util.logging
> I don't believe we explicitly list that anywhere as a requirement

Correction. We DID include this as a bundle in one of our features:

mvn:org.ops4j.pax.logging/pax-logging-service/1.10.1

but it doesn't seem to be required; removing it doesn't introduce any resolution issues. Removing it makes all the logging work again. Marvellous! Thanks for the help :-)
Karaf freezes up under MS-Windows
I find the same issue happens in v4.1.6 and v4.2.0.

On MS-Windows 64-bit, Windows 7.0 Professional, I install a fresh copy of Karaf v4.1.6. It shows the following logo:

C:\software\apache-karaf-4.1.6\bin>karaf
__ __
/ //_/ __ _/ __/
/ ,< / __ `/ ___/ __ `/ /_
/ /| |/ /_/ / / / /_/ / __/
/_/ |_|\__,_/_/ \__,_/_/

Apache Karaf (4.1.6)

Hit '' for a list of available commands and '[cmd] --help' for help on a specific command.
Hit '' or type 'system:shutdown' or 'logout' to shutdown Karaf.

karaf@root()>
karaf@root()> list
START LEVEL 100 , List Threshold: 50

After I type the above command, Karaf seems to freeze up without any response. I need to kill the Karaf process manually and restart Karaf again. Same issue if I type the command again. The same issue also happens in Karaf v4.2.0.

Is it a bug?

Best Rgds,

Tom Leung
Re: Karaf freezes up under MS-Windows
No response no matter what I type, including CTRL-C and hitting ENTER.

I have typed the following command:

karaf@root()>
karaf@root()> bundle:list -t 0
START LEVEL 100 , List Threshold: 0

Same issue; it still freezes up after typing CTRL-C and hitting ENTER.

On Mon, Aug 13, 2018 at 12:44 PM, Jean-Baptiste Onofré wrote:
> Hi Tom,
>
> By freeze, you mean that you can't type any command anymore in the console ?
>
> Did you try CTRL-C or typing ENTER after the command ?
> Does the same happen with the bundle:list -t 0 command ?
>
> Nothing special in the karaf.log ?
>
> Regards
> JB
Re: Karaf freezes up under MS-Windows
Based on the latest Apache Karaf v4.1.7, run on Windows 7: it still freezes up on the Windows side after entering the command "list -t 0", but the same symptom never happens on the Linux side. So the bug really is related to MS-Windows. Could you file this symptom as a bug?

Best Rgds,

Tom Leung

On Mon, Aug 13, 2018 at 1:03 PM Jean-Baptiste Onofré wrote:
> OK, let me try to reproduce (I think I still have a VM with Windows 7).
>
> Thanks,
> Regards
> JB
Testing my custom distro with Cellar and Pax Exam
Hello folks

I have a custom distro and as boot features I have:

featuresBoot=aries-blueprint, bundle, cellar, config, cxf, deployer, diagnostic, feature, http-whiteboard, instance, jaas, kar, log, management, package, service, shell, spring, spring-web, ssh, system, wrap,exam,test-dependencies

But when I run it in Pax Exam, Cellar seems to get upset and complains about:

java.lang.IllegalStateException: Configuration Admin service has been unregistered
at org.apache.felix.cm.impl.ConfigurationAdminImpl.getConfigurationManager(ConfigurationAdminImpl.java:301)[3:org.apache.felix.configadmin:1.8.8]
at org.apache.felix.cm.impl.ConfigurationAdminImpl.getConfiguration(ConfigurationAdminImpl.java:152)[3:org.apache.felix.configadmin:1.8.8]
at org.apache.karaf.cellar.hazelcast.HazelcastGroupManager.init(HazelcastGroupManager.java:72)[89:org.apache.karaf.cellar.hazelcast:4.0.0

Anyone got any good ideas as to how to bootstrap Cellar?

I looked at the Cellar ITests, but they use a "manual" installation of Cellar, not a boot-time spin up.

I also started the Pax Exam test instance, and all the bundles seemed to be installed and running normally when I start it from ./bin/karaf.

Thanks

Tom
Re: Testing my custom distro with Cellar and Pax Exam
Thanks Achim, I thought I shipped most of them; I'll double check.

Tom

On Mon, Dec 14, 2015 at 7:17 PM, Achim Nierbeck wrote:
> Hi,
>
> just recently I discovered that a custom distribution just using featuresBoot is lacking certain libs, which a std. Karaf distribution contains. [1]
> Does your custom distribution also contain those libraries?
>
> After that you should have a distribution comparable to a manually installed Karaf.
>
> Regarding the ConfigurationAdmin service, you might run into some sort of race condition. In that case make sure your test also injects the Configuration Admin service and waits for it to show up.
>
> [1] - https://github.com/apache/karaf/blob/master/assemblies/apache-karaf/pom.xml#L157-L201
>
> --
> Apache Member
> Apache Karaf <http://karaf.apache.org/> Committer & PMC
> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & Project Lead
> blog <http://notizblog.nierbeck.de/>
> Co-Author of Apache Karaf Cookbook <http://bit.ly/1ps9rkS>
>
> Software Architect / Project Manager / Scrum Master
Re: Testing my custom distro with Cellar and Pax Exam
Hmm, I don't think that works, because the error occurs when trying to bootstrap Cellar, not during the Pax Exam test.

https://github.com/OSBI/meteorite-core/blob/master/meteorite-core-itests/src/test/java/bi/meteorite/core/security/TestSecurity.java#L146

I did attempt to block it up, but the error already exists in the logs at this point.

Weird how it works in manual mode.

Tom

On Mon, Dec 14, 2015 at 7:29 PM, Tom Barber wrote:
> Thanks Achim I thought I shipped most of them I'll double check.
Re: Testing my custom distro with Cellar and Pax Exam
Transpires I was pulling in javax.inject, which it was taking a deep disliking to.

You live and learn. My one bugbear with OSGi is the amount of completely random or unrelated error messages that make debugging harder than it could be.

Anyway, mystery solved.

Thanks

Tom

On Mon, Dec 14, 2015 at 8:40 PM, Tom Barber wrote:
> Hmm I don't think that works because the error occurs when trying to bootstrap Cellar not during the PAX Exam test.
Re: Testing my custom distro with Cellar and Pax Exam
Yeah, as soon as I rebuilt the test suite piece by piece it became very clear what the culprit was going to be.

I wasn't complaining about anything specifically; there just seems to be a lot of "unspecific" errors in OSGi land that us newbies have to contend with. I suspect a lot are caused by side effects of the dynamic nature of OSGi, so there is probably not a great deal that can be done.

Thanks for the help Achim.

Tom

On Tue, Dec 15, 2015 at 7:23 AM, Achim Nierbeck wrote:
> Sorry to hear you had a hard time, but I think with Pax Exam we have some sort of conflicting bundle in use concerning Inject.
> Actually it's more like Karaf is using the conflicting one; as the official API bundle is already OSGi-aware, we shouldn't be in need of the ServiceMix one anymore.
> In that case the dependency=true flag on features helps a lot too :-)
>
> regards, Achim
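[Editor's note: the dependency=true flag Achim mentions is set per bundle in a features descriptor; the feature resolver then installs that bundle only if the capability isn't already provided by something else. A hedged sketch, with illustrative feature name and bundle coordinates:]

```xml
<!-- Illustrative features.xml fragment; "my-feature" and "my-bundle" are placeholders. -->
<features name="my-features" xmlns="http://karaf.apache.org/xmlns/features/v1.3.0">
  <feature name="my-feature" version="1.0.0">
    <!-- dependency="true": only installed if nothing else already provides it -->
    <bundle dependency="true">mvn:javax.inject/javax.inject/1</bundle>
    <bundle>mvn:my.group/my-bundle/1.0.0</bundle>
  </feature>
</features>
```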
Updating Snapshot bundles from installedFeatures
Hello folks,

I have a custom distro with an installedFeature (it will later be a boot feature), so it ships with the custom distro.

Currently it's 1.0-SNAPSHOT, and it exists in distro/system.

After we push a build up to our build server it's deployed to our Maven repo. How do I get Karaf to fetch the updated snapshot? Because it already exists in distro/system, Karaf just uses the local one unless I manually delete it. Is there an alternative?

Thanks

Tom
Re: Updating Snapshot bundles from installedFeatures
Scratch that, found the global update policy setting on JIRA.

Tom

On Thu, Dec 17, 2015 at 4:39 PM, Tom Barber wrote:
> Hello folks,
>
> I have a custom distro with an installedFeature (it will later be a boot feature), so it ships with the custom distro.
>
> Currently it's 1.0-SNAPSHOT, and it exists in distro/system.
>
> After we push a build up to our build server it's deployed to our Maven repo. How do I get Karaf to fetch the updated snapshot? Because it already exists in distro/system, Karaf just uses the local one unless I manually delete it. Is there an alternative?
>
> Thanks
>
> Tom
Re: Updating Snapshot bundles from installedFeatures
Hey Nick,

https://issues.apache.org/jira/browse/KARAF-2183

Found that, and a chap on IRC also suggested it. Then bundle:update xxx works from the snapshot repo.

Tom

On Thu, Dec 17, 2015 at 4:51 PM, Nick Baker wrote:
> Tom, we instruct our developers to delete the system/ directory when in dev. Can you send a link to the case you found?
>
> Thanks,
> Nick Baker
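[Editor's note: the update policy discussed here is a pax-url-aether setting. As a sketch of what the change might look like, setting globalUpdatePolicy to "always" forces a remote check for newer SNAPSHOTs on every resolution instead of trusting the copy under system/:]

```
# etc/org.ops4j.pax.url.mvn.cfg
# Check remote repositories for updated SNAPSHOT artifacts on every resolution
org.ops4j.pax.url.mvn.globalUpdatePolicy = always
```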
Datasource not found
Hello folks,

I have a datasource defined in etc/org.ops4j.datasource-users.cfg.

When I install my feature, I have a persistence bundle that starts, but I get:

apache.aries.jpa.container - 1.0.2 | The DataSource osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=userlist) required by bundle bi.meteorite.persistence/1.0.0.SNAPSHOT could not be found.

And my bundle hangs in the grace period.

If I then stop and start Karaf, all my bundles start.

So how do I get it to find the datasource before trying to start my bundle with Blueprint?

Thanks

Tom
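[Editor's note: the thread never shows the contents of the cfg file. As a hedged sketch, a pax-jdbc-config factory configuration for an H2 datasource published as "userlist" might look like the following; the database name and credentials are made up for illustration:]

```
# etc/org.ops4j.datasource-users.cfg (read by pax-jdbc-config)
# osgi.jdbc.driver.name selects the registered DataSourceFactory
osgi.jdbc.driver.name = H2
databaseName = users
# dataSourceName becomes osgi.jndi.service.name on the published DataSource service
dataSourceName = userlist
user = sa
password =
```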
Re: Datasource not found
4.0.2, and whatever the latest pax-jdbc version is.

On 20 Dec 2015 12:20 pm, "Christian Schneider" wrote:
> What versions of Karaf and pax-jdbc do you use?
>
> --
> Christian Schneider
> http://www.liquid-reality.de
>
> Open Source Architect
> http://www.talend.com
Re: Datasource not found
Okay, so I tried something.

If I unzip my distro, start it up and run jdbc:ds-list, the datasource is listed. If I then install my persistence bundle, it finds it.

If I unzip my distro and start up my persistence bundle, it doesn't detect the datasource.

Am I missing some bootstrap?

Tom

On Sun, Dec 20, 2015 at 3:52 PM, Jean-Baptiste Onofré wrote:
> Hi Tom,
>
> what did you define in the cfg file ?
> Anything special in the log ?
>
> I guess you use Karaf 4.0.2. Did you install the pax-jdbc feature:
>
> feature:install pax-jdbc
>
> ?
>
> I fixed that in the next Karaf version: now the jdbc feature installs pax-jdbc (it wasn't the case before).
>
> Regards
> JB
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
Re: Datasource not found
Looks like I was missing pax-jdbc, although I did have 3 other pax-jdbc-related features. *facepalm*

On Sun, Dec 20, 2015 at 5:37 PM, Jean-Baptiste Onofré wrote:
> Can you check if the pax-jdbc and pax-jdbc-config features are installed (boot features in your case, I guess) ?
>
> Regards
> JB
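[Editor's note: for anyone following along, the fix here amounts to making sure both features are present, either as boot features in the custom distro or interactively. A sketch from the Karaf console:]

```
karaf@root()> feature:install pax-jdbc pax-jdbc-config
karaf@root()> jdbc:ds-list
```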
Re: Datasource not found
Hmm yeah, although running my Karaf container in Pax Exam reverts to throwing the same error, even though running it manually doesn't.

On Sun, Dec 20, 2015 at 9:42 PM, Jean-Baptiste Onofré wrote:
> OK, it's what I thought. That's why I added the pax-jdbc feature dependency in jdbc now.
>
> Regards
> JB
Re: Datasource not found
Okay, I give up. Sometimes Pax Exam runs okay, sometimes it doesn't, and I can't work out how to get it to wait for the datasource to become available.

I've tried:

<reference filter="(osgi.jndi.service.name=jdbc/userlist)" availability="mandatory"/>

in Blueprint, and

@Inject
@Filter("(osgi.jdbc.driver.class=org.h2.Driver)")
private DataSourceFactory dsf;

in the test suite, and neither consistently runs the test suite.

On Mon, Dec 21, 2015 at 12:07 PM, Tom Barber wrote:
> Hmm yeah, although running my Karaf container in Pax Exam reverts to throwing the same error, even though running it manually doesn't.
Re: Datasource not found
So I have:

https://github.com/OSBI/meteorite-core/blob/master/karaf/pom.xml#L176

in my custom bundle, and then when you start:

feature:install meteorite-core-features

all my bundles go to active.

In my itests:

https://github.com/OSBI/meteorite-core/blob/master/meteorite-core-itests/src/test/java/bi/meteorite/util/ITestBootstrap.java#L112

installs all my bundles, then:

https://github.com/OSBI/meteorite-core/blob/master/meteorite-core-itests/src/test/java/bi/meteorite/core/security/TestSecurity.java#L72

As far as I can see I'm literally trying to block up everything, yet still the datasources don't get resolved some of the time, but sometimes they do. The filter timeouts look correct; am I missing anything else I can try?

Thanks

Tom

On Mon, Dec 21, 2015 at 1:44 PM, Achim Nierbeck wrote:
> If you want to have your datasource injected into your test, you can also add a timeout on that @Filter annotation.
> That might help already, especially since your datasource is a managed service.
>
> regards, Achim
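[Editor's note: Achim's suggestion, sketched against the @Filter injection Tom already posted. This is a hedged fragment, not a complete test class; the 30-second value is arbitrary, and Pax Exam's org.ops4j.pax.exam.util.Filter takes the timeout in milliseconds.]

```java
// Sketch: ask Pax Exam to wait up to 30 s for the matching
// DataSourceFactory service before failing the injection.
@Inject
@Filter(value = "(osgi.jdbc.driver.class=org.h2.Driver)", timeout = 30000)
private DataSourceFactory dsf;
```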