Re: Problems With Factory Configurations In Karaf 3.0.1
Hello John, It looks like it is fixed on the 3.0.x branch: https://github.com/apache/karaf/blob/karaf-3.0.x/features/core/src/main/java/org/apache/karaf/features/internal/FeatureConfigInstaller.java Guillaume reverted the offending change on July 11: https://github.com/apache/karaf/commit/c31edca04692d8127989acbcdadc8157d1390f2a The same revert was done on master, but Achim appears to have remade the change here: https://github.com/apache/karaf/commit/21089127c89ddae275d495607c422dda8ffec474 Achim - was that intentional? I am not sure if David was referring to me in some way when he said "there is a bunch of misinformation in some of the comments on this thread". I don't really care about the philosophical reasons for one particular factory pid format or the other. I just know that with this change my ManagedServiceFactories and Declarative Services components could no longer find their factory configurations...which made it impossible, for me at least, to upgrade to Karaf 3. Karaf is a great project, but the surprise regressions and the long, variable delays between versions (which include fixes for these regressions) are very frustrating. What makes it worse for me is that in the version I was using (Karaf 2.3.2), editing factory configurations (not stored in etc) from the Karaf command line console is broken, causing me hours upon hours of misery, as the workaround (copy the factory configuration to a file in the etc directory, edit the file, delete the old factory configuration, restart karaf) is very error prone (I can't ask people to use the web console because I need to use Pax Web 3 with Karaf 2.3.2). I keep being told how karaf is "too hard to use". Oh well... thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921p4035683.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Apache Karaf 3.0.2?
Hello JB, Achim, Thanks very much for the update! Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Apache-Karaf-3-0-2-tp4034950p4034971.html Sent from the Karaf - User mailing list archive at Nabble.com.
Apache Karaf 3.0.2?
Hello, Any updated plans for Karaf 3.0.2? I thought I remembered reading late July...but that is obviously not the case now. I ask because I would like to upgrade to 3.x but I can't because of KARAF-2950 and KARAF-1075. Of course KARAF-3100 would be a bonus :). And does log4j2 migration look harder than anticipated? If it does look like it will be a long time I will switch from log4j to logback instead of waiting for log4j2. Hope this doesn't come across with the wrong tone. I am just trying to plan when I could update. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Apache-Karaf-3-0-2-tp4034950.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Problems With Factory Configurations In Karaf 3.0.1
OK, added: https://issues.apache.org/jira/browse/KARAF-3100 and: https://github.com/apache/karaf/pull/43 to allow karaf to write feature configurations to file in the ${karaf.etc} dir. Let me know if this makes sense. thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921p4034002.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Problems With Factory Configurations In Karaf 3.0.1
Just to clarify here. I was considering implementing this feature (it doesn't look that hard). Just wondering whether it is something that would be accepted if the implementation is OK. thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921p4033983.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Problems With Factory Configurations In Karaf 3.0.1
Hi Jean-Baptiste, A question (sort of off-topic) - where do the variables in the configuration get replaced? Before this? By config admin? I ask because I am wondering how hard it would be to redirect this configuration to a file (say via a configuration item in org.apache.karaf.features.cfg - perhaps called redirectFeatureConfigsToFile). Would you accept such a change? Could it possibly make it into 3.0.2? This would solve so many problems for me (it allows me to keep all default configuration in the feature file...yet makes the configuration easy to edit via a configuration file in the etc dir). thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921p4033959.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Problems With Factory Configurations In Karaf 3.0.1
Hello JB, I am missing something here. In Karaf 2.3.2 when I added a configuration like this: this created a configuration like this:

Pid: foo.bar.704f93e3-56f4-41bc-92d8-c6acf2ae167a
FactoryPid: foo.bar
BundleLocation: mvn:com.mycompany/foo.bar/1.0.0-SNAPSHOT
Properties:
   host = 0.0.0.0
   port = 8881
   org.apache.karaf.features.configKey = foo.bar-unencrypted
   service.factoryPid = foo.bar
   service.pid = foo.bar.0cfabec4-15f4-4d33-bfa3-c9b4bc0dce66

suggesting that the factory pid should be the string before the "-" (if there is a "-"). Was this incorrect? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921p4033957.html Sent from the Karaf - User mailing list archive at Nabble.com.
RE: Problems With Factory Configurations In Karaf 3.0.1
This parsePid method really needs to return three values (the pid, the factory pid, and the instance name):

private String[] parsePid(String pid) {
    int n = pid.indexOf('-');
    if (n > 0) {
        String instance = pid.substring(n + 1);
        String factoryPid = pid.substring(0, n);
        // pid is null as it hasn't been generated yet
        return new String[]{null, factoryPid, instance};
    } else {
        return new String[]{pid, null, null};
    }
}

Of course, if this change were made, additional changes would need to be made to the rest of the file to match. thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921p4033956.html Sent from the Karaf - User mailing list archive at Nabble.com.
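[Editor's note: to make the two interpretations of the "-" concrete, here is a minimal, self-contained sketch. The class name and the sample key "foo.bar-unencrypted" are illustrative only; the two methods mirror the existing two-element parse and the three-element parse proposed above.]

```java
public class ParsePidDemo {
    // Existing parse: the part after the '-' ends up labelled "factoryPid".
    static String[] parseTwo(String pid) {
        int n = pid.indexOf('-');
        if (n > 0) {
            return new String[]{pid.substring(0, n), pid.substring(n + 1)};
        }
        return new String[]{pid, null};
    }

    // Proposed parse: {pid, factoryPid, instance}. pid is null because for a
    // factory configuration Config Admin has not generated the real pid yet.
    static String[] parseThree(String pid) {
        int n = pid.indexOf('-');
        if (n > 0) {
            return new String[]{null, pid.substring(0, n), pid.substring(n + 1)};
        }
        return new String[]{pid, null, null};
    }

    public static void main(String[] args) {
        String key = "foo.bar-unencrypted";
        String[] two = parseTwo(key);
        String[] three = parseThree(key);
        System.out.println("two:   pid=" + two[0] + " factoryPid=" + two[1]);
        System.out.println("three: pid=" + three[0] + " factoryPid=" + three[1]
                + " instance=" + three[2]);
    }
}
```

For "foo.bar-unencrypted" the two-element parse yields pid "foo.bar" and "factoryPid" "unencrypted", which is exactly the naming confusion discussed in this thread; the three-element parse makes "foo.bar" the factory pid.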
RE: Problems With Factory Configurations In Karaf 3.0.1
Hello Benjamin, I think the problem is that the names of the variables are incorrect/confusing. Look at the parsePid method in the same file:

private String[] parsePid(String pid) {
    int n = pid.indexOf('-');
    if (n > 0) {
        String factoryPid = pid.substring(n + 1);
        pid = pid.substring(0, n);
        return new String[]{pid, factoryPid};
    } else {
        return new String[]{pid, null};
    }
}

The factory pid is definitely not the string after the "-" as suggested here. Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921p4033954.html Sent from the Karaf - User mailing list archive at Nabble.com.
RE: Problems With Factory Configurations In Karaf 3.0.1
Hello Benjamin, Thanks for doing the testing! I think I can see the relevant commit: https://github.com/apache/karaf/commit/b398c7690a09b02ad0f900fa4d6f51308f5970aa

features/core/src/main/java/org/apache/karaf/features/internal/FeaturesServiceImpl.java

protected Configuration createConfiguration(ConfigurationAdmin configurationAdmin,
                                            String pid, String factoryPid)
        throws IOException, InvalidSyntaxException {
    if (factoryPid != null) {
-       return configurationAdmin.createFactoryConfiguration(pid, null);
+       return configurationAdmin.createFactoryConfiguration(factoryPid, null);
    } else {
        return configurationAdmin.getConfiguration(pid, null);
    }

If I understand the code in this file correctly, if I put this in my feature file: the pid variable here is "foo.bar" and the factoryPid variable is "unencrypted". Perhaps this change should be reverted? thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921p4033949.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Problems With Factory Configurations In Karaf 3.0.1
Hello Achim, So I can assume that what I am seeing is not the expected behaviour then? If yes, I will open a JIRA. Note that it messes up configuration for other components as well. e.g. This is what I see when I install hawtio-git, which can't be correct:

Pid: git.2bb62379-f85c-4846-bc6e-557ac591738b
FactoryPid: git
BundleLocation: null
Properties:
   hawtio.config.dir = ./etc
   org.apache.karaf.features.configKey = hawtio-git
   service.factoryPid = git
   service.pid = git.2bb62379-f85c-4846-bc6e-557ac591738b

thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921p4033944.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Problems With Factory Configurations In Karaf 3.0.1
Hello Achim, Thanks for the quick response! I am confused. I understood that if I created a file foo.bar-encrypted.cfg in the etc directory a configuration would be created for factory pid foo.bar. Similarly if I created a configuration like this in a feature file: . . I thought a configuration for factory pid foo.bar would also be created. Have I misunderstood something here? This is the behaviour I saw in karaf 2.3...which doesn't seem to be the case now (at least for adding the configuration for the feature files). Now it is thinking the part after the "-" is the factory pid. thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921p4033928.html Sent from the Karaf - User mailing list archive at Nabble.com.
Problems With Factory Configurations In Karaf 3.0.1
Hello, In my feature file I set up factory configurations like this:

. .
host=0.0.0.0
port=8081

host=0.0.0.0
port=8881

When I install this feature in Karaf 3.0.1, the configuration appears like this. I was expecting the factory pid to be foo.bar (as it was for Karaf 2.3.x):

Pid: unencrypted.0ed76d9e-5a12-4b2d-bc7f-c02e6bce4b71
FactoryPid: unencrypted
BundleLocation: null
Properties:
   org.apache.karaf.features.configKey = foo.bar-unencrypted
   service.factoryPid = unencrypted
   service.pid = unencrypted.0ed76d9e-5a12-4b2d-bc7f-c02e6bce4b71
   host = 0.0.0.0
   port = 8081

Pid: encrypted.6b6f5e66-5fc2-479e-98f1-3873fe16be0d
FactoryPid: encrypted
BundleLocation: null
Properties:
   org.apache.karaf.features.configKey = foo.bar-encrypted
   service.factoryPid = encrypted
   service.pid = encrypted.6b6f5e66-5fc2-479e-98f1-3873fe16be0d
   host = 0.0.0.0
   port = 8881

So my question is - am I supposed to specify factory configuration in a different way now for Karaf 3? If someone could let me know, it would be a big help. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Problems-With-Factory-Configurations-In-Karaf-3-0-1-tp4033921.html Sent from the Karaf - User mailing list archive at Nabble.com.
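[Editor's note: the feature-file XML in the message above was stripped by the mailing list archive, so the exact markup is lost. For orientation only, a hypothetical reconstruction of what such a feature might look like is shown below - the feature name and structure are guesses, not the poster's actual file; only the config names and properties are taken from the output quoted in the message. In a Karaf feature file, a config element whose name contains a "-" was (in 2.3.x) interpreted as factory-pid "-" instance key.]

```xml
<!-- Hypothetical sketch; the part of the name before the '-' is the
     factory pid, the part after it just distinguishes the instances. -->
<feature name="foo-bar" version="1.0.0">
  <config name="foo.bar-unencrypted">
    host=0.0.0.0
    port=8081
  </config>
  <config name="foo.bar-encrypted">
    host=0.0.0.0
    port=8881
  </config>
</feature>
```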
Re: Anyone Seen This Declarative Services Error (Karaf 3)?
OK, I didn't see this: https://issues.apache.org/jira/browse/KARAF-2950 Looks like I need to wait for Karaf 3.0.2. thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Anyone-Seen-This-Declarative-Services-Error-Karaf-3-tp4033891p4033913.html Sent from the Karaf - User mailing list archive at Nabble.com.
Anyone Seen This Declarative Services Error (Karaf 3)?
Hello, I am installing the following base features when starting karaf 3.0.1:

#
# Comma separated list of features to install at startup
#
featuresBoot=config,standard,region,package,kar,obr,ssh,management,scr,war

#
# Defines if the boot features are started in asynchronous mode (in a dedicated thread)
#
featuresBootAsynchronous=false

After first start, I seem to be getting a weird error when trying to run "scr:list". I am seeing the following in the log:

2014-06-30 22:41:29,330 | DEBUG | lixDispatchQueue | o.a.f.scr | []:[]:[]:[] | 67 - org.apache.felix.scr - 1.8.2 | FrameworkEvent ERROR - org.apache.felix.scr
org.osgi.framework.ServiceException: Service cannot be cast: org.apache.felix.scr.impl.ScrGogoCommand
    at org.apache.felix.framework.ServiceRegistrationImpl.getFactoryUnchecked(ServiceRegistrationImpl.java:332) ~[na:na]
    at org.apache.felix.framework.ServiceRegistrationImpl.getService(ServiceRegistrationImpl.java:219) ~[na:na]
    at org.apache.felix.framework.ServiceRegistry.getService(ServiceRegistry.java:320) ~[na:na]
    at org.apache.felix.framework.Felix.getService(Felix.java:3568) ~[na:na]
    at org.apache.felix.framework.BundleContextImpl.getService(BundleContextImpl.java:468) ~[na:na]
    at org.apache.karaf.shell.console.completer.CommandsCompleter.unProxy(CommandsCompleter.java:298) ~[na:na]
    at org.apache.karaf.shell.console.completer.CommandsCompleter.checkData(CommandsCompleter.java:234) ~[na:na]
    at org.apache.karaf.shell.console.completer.CommandsCompleter.complete(CommandsCompleter.java:86) ~[na:na]
    at org.apache.karaf.shell.console.impl.jline.CompleterAsCompletor.complete(CompleterAsCompletor.java:32) ~[na:na]
    at jline.console.ConsoleReader.complete(ConsoleReader.java:3077) ~[na:na]
    at jline.console.ConsoleReader.readLine(ConsoleReader.java:2501) ~[na:na]
    at jline.console.ConsoleReader.readLine(ConsoleReader.java:2162) ~[na:na]
    at org.apache.karaf.shell.console.impl.jline.ConsoleImpl.readAndParseCommand(ConsoleImpl.java:280) ~[na:na]
    at org.apache.karaf.shell.console.impl.jline.ConsoleImpl.run(ConsoleImpl.java:207) ~[na:na]
    at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_51]
    at org.apache.karaf.shell.console.impl.jline.ConsoleFactoryService$3.doRun(ConsoleFactoryService.java:126) ~[na:na]
    at org.apache.karaf.shell.console.impl.jline.ConsoleFactoryService$3$1.run(ConsoleFactoryService.java:117) ~[na:na]
    at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_51]
    at org.apache.karaf.jaas.modules.JaasHelper.doAs(JaasHelper.java:47) ~[na:na]
    at org.apache.karaf.shell.console.impl.jline.ConsoleFactoryService$3.run(ConsoleFactoryService.java:115) ~[na:na]

Anyone seen an error like this before? Just trying to figure out what I may have done wrong here. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Anyone-Seen-This-Declarative-Services-Error-Karaf-3-tp4033891.html Sent from the Karaf - User mailing list archive at Nabble.com.
Where Has The Blueprint Status Gone In Karaf 3.x?
Hello, I am sure I am missing something obvious here but...where has the blueprint status gone? I know there was a separate column in karaf 2.3:

karaf@root> list -t 0 -s
START LEVEL 100 , List Threshold: 0
   ID   State         Blueprint      Level  Symbolic name
. .
[  14] [Active     ] [Created     ] [  25] org.apache.karaf.shell.console (2.3.2)
[  15] [Active     ] [Created     ] [  28] org.apache.karaf.deployer.spring (2.3.2)
[  16] [Active     ] [Created     ] [  28] org.apache.karaf.deployer.blueprint (2.3.2)
[  17] [Active     ] [Created     ] [  30] org.apache.karaf.jaas.command (2.3.2)
[  18] [Active     ] [Created     ] [  30] org.apache.karaf.jaas.modules (2.3.2)
[  19] [Active     ] [Created     ] [  30] org.apache.karaf.shell.dev (2.3.2)

How can I see the same information in karaf 3?:

23 | Active | 24 | 3.0.1 | org.apache.karaf.deployer.spring
24 | Active | 24 | 3.0.1 | org.apache.karaf.deployer.blueprint
40 | Active | 30 | 3.0.1 | org.apache.karaf.shell.console
41 | Active | 30 | 3.0.1 | org.apache.karaf.jaas.modules
58 | Active | 30 | 3.0.1 | org.apache.karaf.jaas.command

thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Where-Has-The-Blueprint-Status-Gone-In-Karaf-3-x-tp4033890.html Sent from the Karaf - User mailing list archive at Nabble.com.
Registering My Own Connection Factory For The Karaf JMS Service
Hello, I currently register my own JMS connection factories as services. It would be nice if I could take advantage of the new JMS command line tools in karaf (especially since the activemq web console has had problems for a long time with equinox - ugh). I started playing around with the karaf jms tools and I see these service parameters registered:

karaf@root()> ls javax.jms.ConnectionFactory
[javax.jms.ConnectionFactory]
   osgi.service.blueprint.compname = pooledConnectionFactory
   name = jms-default
   osgi.jndi.service.name = jms/jms-default
   service.id = 1167
Provided by :
   Bundle 168

To get my connection factory seen by the karaf jms tools, I assume I just need to make sure that I have set osgi.jndi.service.name to "jms/" in my service properties? Do these tools explicitly require a pooled/cached connection factory...or does it not matter? thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Registering-My-Own-Connection-Factory-For-The-Karaf-JMS-Service-tp4033861.html Sent from the Karaf - User mailing list archive at Nabble.com.
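[Editor's note: a sketch of the service properties such a registration would carry, assuming the naming convention visible in the `ls` output above. The class name and the connection factory name "my-broker" are invented for illustration; the actual registerService call is shown only as a comment since it needs a live OSGi BundleContext.]

```java
import java.util.Hashtable;

public class JmsServiceProps {
    // Build the service properties a ConnectionFactory would be registered
    // with; the property names mirror the `ls` output quoted in the message.
    static Hashtable<String, Object> connectionFactoryProps(String name) {
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("name", name);
        props.put("osgi.jndi.service.name", "jms/" + name);
        return props;
    }

    public static void main(String[] args) {
        Hashtable<String, Object> props = connectionFactoryProps("my-broker");
        // In a bundle activator one would then do (sketch, not compiled here):
        // bundleContext.registerService(ConnectionFactory.class, myFactory, props);
        System.out.println(props.get("osgi.jndi.service.name"));
    }
}
```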
Re: Require-Capabilities and Karaf 2.3
Hello Achim, OK, I understand my problem. It is because I am using equinox. Christian mentioned the issue here: http://www.mail-archive.com/users@felix.apache.org/msg15464.html And I understand Christian opened an issue about it (with information about a workaround as well - I haven't tried it yet): https://issues.apache.org/jira/browse/KARAF-3069 Thanks for your help! Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Require-Capabilties-and-Karaf-2-3-tp4033665p4033859.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Require-Capabilities and Karaf 2.3
Hello Achim, Is this the configuration? I see this in both the 2.3.3 and 2.3.5 files. thanks, Gareth

org.osgi.framework.system.capabilities= \
    ${eecap-${java.specification.version}}, \
    service-reference;effective:=active;objectClass=org.osgi.service.packageadmin.PackageAdmin, \
    service-reference;effective:=active;objectClass=org.osgi.service.startlevel.StartLevel, \
    service-reference;effective:=active;objectClass=org.osgi.service.url.URLHandlers, \
    ${services-${karaf.framework}}

eecap-1.7= osgi.ee; osgi.ee="OSGi/Minimum"; version:List="1.0,1.1,1.2", \
    osgi.ee; osgi.ee="JavaSE"; version:List="1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7"
eecap-1.6= osgi.ee; osgi.ee="OSGi/Minimum"; version:List="1.0,1.1,1.2", \
    osgi.ee; osgi.ee="JavaSE"; version:List="1.0,1.1,1.2,1.3,1.4,1.5,1.6"
eecap-1.5= osgi.ee; osgi.ee="OSGi/Minimum"; version:List="1.0,1.1,1.2", \
    osgi.ee; osgi.ee="JavaSE"; version:List="1.0,1.1,1.2,1.3,1.4,1.5"
eecap-1.4= osgi.ee; osgi.ee="OSGi/Minimum"; version:List="1.0,1.1,1.2", \
    osgi.ee; osgi.ee="JavaSE"; version:List="1.0,1.1,1.2,1.3,1.4"
eecap-1.3= osgi.ee; osgi.ee="OSGi/Minimum"; version:List="1.0,1.1", \
    osgi.ee; osgi.ee="JavaSE"; version:List="1.0,1.1,1.2,1.3"
eecap-1.2= osgi.ee; osgi.ee="OSGi/Minimum"; version:List="1.0,1.1", \
    osgi.ee; osgi.ee="JavaSE"; version:List="1.0,1.1,1.2"

-- View this message in context: http://karaf.922171.n3.nabble.com/Require-Capabilties-and-Karaf-2-3-tp4033665p4033677.html Sent from the Karaf - User mailing list archive at Nabble.com.
Require-Capabilities and Karaf 2.3
Hello, I recently updated to bndtools 2.3 in my build environment and it appears to have started adding the following to all my bundles:

Require-Capability: osgi.ee;filter:="(&(osgi.ee=JavaSE)(version=1.7))"

When I try to install in karaf 2.3.3 I am getting the following exception:

Error executing command: Could not start bundle : The bundle " [138]" could not be resolved. Reason: Missing Constraint: Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.7))"

Is this "Require-Capability" supported in karaf 2.3.x? If yes, are there any commands in 2.3 which would allow me to check these capabilities? Just wondering whether I should find a way to remove the "Require-Capability" from my bundles temporarily (I plan to upgrade to karaf 3.x after 3.0.2 comes out). thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Require-Capabilties-and-Karaf-2-3-tp4033665.html Sent from the Karaf - User mailing list archive at Nabble.com.
The Right Way To Patch/Upgrade Bundles?
Hello, I often have to do minor patch releases/updates to an existing system. What is the best way to do this? I used features to do the initial install. Currently I am doing something like this, for each bundle that needs to be updated:

(1) Find the bundle id: karaf:root> list -t 0 -s | grep
(2) Update the bundle in place: karaf:root> update
(3) Restart karaf (minor changes may not need this).

Is there a better way? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/The-Right-Way-To-Patch-Upgrade-Bundles-tp4033327.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Bulk Delete Configurations
Thanks for all the answers! I actually tried something like this (the tee was to put the output to file - is there a better way?): karaf@root> config:list "()" | grep service.pid | exec tee /tmp/badconfigs.txt Then this to create the commands (there were some weird characters in the output): sed --expression="s/.*=/config:delete/g;s/...$//g" badconfigs.txt which I could then feed into karaf. One thing I noted - when I tried to add the "sed" expression to the karaf command line the execution became very slow (even the tee slowed down karaf a lot). I guess that is just a limitation of the implementation? Also is there any way to pipe the commands into karaf directly for execution without using a temporary file? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Bulk-Delete-Configurations-tp4033294p4033326.html Sent from the Karaf - User mailing list archive at Nabble.com.
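[Editor's note: the sed rewrite described above can also be sketched as a tiny standalone program that performs the same transform - strip everything up to the "=" on a "service.pid = ..." line and prepend "config:delete". The class name and the sample pid are invented for illustration; trailing-garbage handling in the original sed (`s/...$//g`) is replaced here by a simple trim.]

```java
public class PidToDelete {
    // Turn a "   service.pid = <pid>" line into a "config:delete <pid>"
    // command, the same rewrite the sed expression in the message performs.
    static String toDeleteCommand(String line) {
        return "config:delete " + line.replaceFirst(".*=\\s*", "").trim();
    }

    public static void main(String[] args) {
        System.out.println(toDeleteCommand("   service.pid = foo.bar.0cfabec4"));
    }
}
```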
Bulk Delete Configurations
Hello, Any way to bulk delete configurations? I realize I have created 3 instances of a factory pid which shouldn't be there. I know I can create a script to delete the pids. Just wondering whether there may be another way. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Bulk-Delete-Configurations-tp4033294.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: WABs and Web-ContextPath
Hello Scott, There are other ways to do this in OSGi/Karaf. But to use these other methods you need to use the http service or the pax web implementation of the whiteboard pattern to register servlet contexts/servlets/filters etc. One way to implement your requirement would be to create one bundle to manage the registering of resources with pax web. Each servlet implementation bundle could have a special manifest flag/special file which identifies it as part of the application. The main bundle would then be a bundle listener which looks for this special file/manifest flag, and when it finds it, it looks inside the bundle and registers the relevant resources with pax web under the main servlet context. I do this already and it works well for me. Let me know if this answers your question. regards, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/WABs-and-Web-ContextPath-tp4032881p4032948.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Karaf 3.0.0 Questions
Hello Jean-Baptiste, You are right. I was getting the URL wrong :). I only brought this up because I thought the new message was confusing. The karaf 2.3.3 message was clear to me: karaf@root> features:addurl mvn:org.apache.activemq/activemq-karaf/5.9.0/features/xml Error executing command: Unable to add repositories: URL [mvn:org.apache.activemq/activemq-karaf/5.9.0/features/xml] could not be resolved. The 3.0.0 error "Could not find artifact xxx in defaultlocal (file:/home//.m2/repository)" suggests that I don't have any remote repositories configured. I will take a look at the pax url code. thanks again, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Karaf-3-0-0-Questions-tp4030800p4030814.html Sent from the Karaf - User mailing list archive at Nabble.com.
Karaf 3.0.0 Questions
Hello, I would first like to say thank you for releasing Karaf 3.0.0. Just in time for Christmas! Great work! Of course, as soon as you released it I decided to download and play with it. After my initial play, I have a few questions:

(1) The first thing I tried to do was install activemq, and of course I got the URL wrong the first time (I also know now that I don't need to know the full URL for activemq as it is in the features.repo file). I mention this because the error message appeared confusing:

karaf@root()> feature:repo-add mvn:org.apache.activemq/activemq-karaf/5.9.0/features/xml
Adding feature url mvn:org.apache.activemq/activemq-karaf/5.9.0/features/xml
Error executing command: Error resolving artifact org.apache.activemq:activemq-karaf:features:xml:5.9.0: Could not find artifact org.apache.activemq:activemq-karaf:features:xml:5.9.0 in defaultlocal (file:/home/gcollins/.m2/repository/)

Is this error message coming from karaf or pax-url? When I saw this the first time I thought it meant that external repositories (i.e. maven central) were not configured. Is it confusing to anyone else?

(2) I understand that activemq uses hawtio now for the webconsole so I tried to install that:

karaf@root()> feature:install hawtio
Error executing command: Could not start bundle mvn:io.hawt/hawtio-karaf-terminal/1.2.1/war in feature(s) hawtio-karaf-terminal-1.2.1: The bundle "io.hawt.hawtio-karaf-terminal_1.2.1 [164]" could not be resolved. Reason: Missing Constraint: Import-Package: org.apache.karaf.shell.console.jline; version="[2.2.0,3.0.0)"

Is this a known hawtio issue? If not, where should I open an issue for this?

(3) Any plans to update to javax.annotations 1.2? I ask because this is a dependency for jersey 2 (I am mulling upgrading from jersey 1.x). Or is this something irrelevant to karaf (i.e. this is purely an app level responsibility)?
thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Karaf-3-0-0-Questions-tp4030800.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Uninstalling Declarative Services Bundles Which Require Configuration
OK, I assume it is a defect then. JIRA added: https://issues.apache.org/jira/browse/KARAF-2367 thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Uninstalling-Declarative-Services-Bundles-Which-Require-Configuration-tp4029022p4029091.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Updating Managed Service Factory Config Via "config:" commands?
Issue created (better late than never): https://issues.apache.org/jira/browse/KARAF-2366 thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Updating-Managed-Service-Factory-Config-Via-config-commands-tp4028322p4029090.html Sent from the Karaf - User mailing list archive at Nabble.com.
Uninstalling Declarative Services Bundles Which Require Configuration
Hello, I am seeing behaviour in karaf which I did not expect. I uninstalled a bundle which used declarative services...and reinstalled it from a different location (old location - maven repository in nexus, new location - local file system). When I ran config:list for the pid after doing this, it still showed that the config was bound to the old bundle, even though it was uninstalled (which of course meant the new bundle could not be activated using declarative services). This behaviour persisted even after a restart of karaf. To fix it, I had to remove and re-add the configuration. I shouldn't need to do this, should I? Is this a bug? I am using karaf 2.3.1 with equinox. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Uninstalling-Declarative-Services-Bundles-Which-Require-Configuration-tp4029022.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Updating Managed Service Factory Config Via "config:" commands?
Hi Jean-Baptiste, Achim, Thanks very much for the responses! When I look at the help I see:

OPTIONS
    -f, --use-file
        Configuration lookup using the filename instead of the pid

which suggests this has something to do with the file already being there. Does this need an update? Anyway I tried to run this command. Result below:

karaf@root> config:edit -f com.mycompany.myservice-healthcheck
Error executing command: java.lang.NullPointerException

2013-03-26 21:10:08,231 | INFO | []:[] | Thread-8220 | Console | araf.shell.console.jline.Console 198 | 14 - org.apache.karaf.shell.console - 2.3.1 | Exception caught while executing command
java.lang.NullPointerException
    at org.apache.karaf.shell.config.ConfigCommandSupport.findConfigurationByFileName(ConfigCommandSupport.java:115)
    at org.apache.karaf.shell.config.EditCommand.doExecute(EditCommand.java:50)
    at org.apache.karaf.shell.config.ConfigCommandSupport.doExecute(ConfigCommandSupport.java:68)
    at org.apache.karaf.shell.console.OsgiCommandSupport.execute(OsgiCommandSupport.java:38)
    at org.apache.felix.gogo.commands.basic.AbstractCommand.execute(AbstractCommand.java:35)
    at org.apache.felix.gogo.runtime.CommandProxy.execute(CommandProxy.java:78)
    at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:474)
    at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:400)
    at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)
    at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:183)
    at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:120)
    at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:89)
    at org.apache.karaf.shell.console.jline.Console.run(Console.java:174)
    at java.lang.Thread.run(Unknown Source)[:1.7.0_05]
    at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$4.doRun(ShellFactoryImpl.java:144)[28:org.apache.karaf.shell.ssh:2.3.1]
    at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$4$1.run(ShellFactoryImpl.java:135)
    at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_05]
    at javax.security.auth.Subject.doAs(Unknown Source)[:1.7.0_05]
    at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$4.run(ShellFactoryImpl.java:133)[28:org.apache.karaf.shell.ssh:2.3.1]

thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Updating-Managed-Service-Factory-Config-Via-config-commands-tp4028322p4028325.html Sent from the Karaf - User mailing list archive at Nabble.com.
Updating Managed Service Factory Config Via "config:" commands?
Hi, I am trying to update factory config via config: commands (Karaf 2.3.1). This config was created by an entry in a feature file, thus a backing config file does not exist. i.e.:

Pid: com.mycompany.myservice.327d4cd4-4704-4190-9e51-26f5f0b91435
FactoryPid: com.mycompany.myservice
BundleLocation: mvn:com.mycompany/mycompany/1.0.0-SNAPSHOT
Properties:
   myservice.host = 0.0.0.0
   myservice.port = 8080
   org.apache.karaf.features.configKey = com.mycompany.myservice-healthcheck
   service.factoryPid = com.mycompany.myservice
   service.pid = com.mycompany.myservice.327d4cd4-4704-4190-9e51-26f5f0b91435

If I try and edit this config, e.g.:

config:edit com.mycompany.myservice-healthcheck
config:propset myservice.port 8081
config:update

I get a new config:

Pid: com.mycompany.myservice.8cbfea87-f66e-44b4-b94a-11fdb14c235f
FactoryPid: com.mycompany.myservice
BundleLocation: mvn:com.antennasoftware/gravity.clientAP.blueprint/1.0.0-SNAPSHOT
Properties:
   service.pid = com.mycompany.myservice.8cbfea87-f66e-44b4-b94a-11fdb14c235f
   myservice.port = 8081
   service.factoryPid = com.mycompany.myservice
   felix.fileinstall.filename = file:/home/karaf/apache-karaf-2.3.1/etc/com.mycompany.myservice-healthcheck.cfg

If I specify the pid exactly:

> config:edit com.mycompany.myservice.327d4cd4-4704-4190-9e51-26f5f0b91435
> config:propset myservice.port 8081
> config:update

nothing happens. Am I missing something here? BTW, if the factory configuration is backed by a file, everything works fine. And if I update a non-factory config not backed by a file, a file gets created after the update. If someone could let me know what I am doing wrong, it would be much appreciated. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Updating-Managed-Service-Factory-Config-Via-config-commands-tp4028322.html Sent from the Karaf - User mailing list archive at Nabble.com.
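[Editor's note: one way to locate the generated factory configuration before editing it is to search by the org.apache.karaf.features.configKey property the feature installer sets. With Config Admin the equivalent would be an LDAP filter passed to ConfigurationAdmin.listConfigurations; the standalone sketch below (class name and sample pids invented) simulates that lookup over plain maps.]

```java
import java.util.List;
import java.util.Map;

public class FindByConfigKey {
    // Return the service.pid of the configuration whose
    // org.apache.karaf.features.configKey matches, or null if none does.
    // With ConfigurationAdmin the equivalent filter would be
    // "(org.apache.karaf.features.configKey=" + key + ")".
    static String findPid(List<Map<String, String>> configs, String key) {
        for (Map<String, String> props : configs) {
            if (key.equals(props.get("org.apache.karaf.features.configKey"))) {
                return props.get("service.pid");
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Map<String, String>> configs = List.of(
            Map.of("service.pid", "com.mycompany.myservice.327d4cd4",
                   "org.apache.karaf.features.configKey", "com.mycompany.myservice-healthcheck"),
            Map.of("service.pid", "other.pid"));
        System.out.println(findPid(configs, "com.mycompany.myservice-healthcheck"));
    }
}
```

The pid returned (the generated one, not the configKey) is what config:edit would then need.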
Re: Logback and Apache Karaf
Hi Ryan,

I am curious - how did you install it?

thanks,
Gareth

On Thu, Mar 14, 2013 at 6:26 PM, Ryan Moquin [via Karaf] wrote:
> FYI, I tried the pax-logging-logback library (not sure if it was 1.7.0 or
> 1.7.1 or something), but it went crazy when I installed it. Literally just
> about every bundle version of logback that was ever made ended up installed
> and activated, and there were some other weird things. I ended up deciding
> just to stick with what's there by default, though I would like to use
> logback somehow.
>
> On Thu, Mar 14, 2013 at 11:16 AM, Guillaume Nodet <[hidden email]> wrote:
>> Another drawback of switching to logback would be that the log:set command,
>> which allows changing the logger levels, would not work anymore.
>>
>> On Thu, Mar 14, 2013 at 9:03 AM, Achim Nierbeck <[hidden email]> wrote:
>>> No, not really, because right now we rather use the ConfigurationAdmin
>>> Service way of configuring the logger.
>>> But I think you're able to configure it in the org.ops4j.pax.logging.cfg
>>> file to use a logback config.
>>>
>>> regards, Achim
>>>
>>> 2013/3/14 Gareth <[hidden email]>
>>>> Hi,
>>>>
>>>> Any plans to make it easy to switch the Pax Logging (now 1.7.0) bundled
>>>> with Karaf to use Logback instead? e.g. like having a:
>>>>
>>>> karaf.loggingbackend = log4j/logback
>>>>
>>>> I ask because I was interested in using logback-specific features (e.g.
>>>> log4j 1.2 does not support Markers).
>>>>
>>>> thanks in advance,
>>>> Gareth
>>>>
>>>> --
>>>> View this message in context:
>>>> http://karaf.922171.n3.nabble.com/Logback-and-Apache-Karaf-tp4028183.html
>>>> Sent from the Karaf - User mailing list archive at Nabble.com.
>>>
>>> --
>>> Apache Karaf <http://karaf.apache.org/> Committer & PMC
>>> OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & Project Lead
>>> OPS4J Pax for Vaadin <http://team.ops4j.org/wiki/display/PAXVAADIN/Home> Committer & Project Lead
>>> blog <http://notizblog.nierbeck.de/>
>>
>> --
>> Guillaume Nodet
>> Red Hat, Open Source Integration
>> Email: [hidden email]
>> Web: http://fusesource.com
>> Blog: http://gnodet.blogspot.com/

--
View this message in context: http://karaf.922171.n3.nabble.com/Logback-and-Apache-Karaf-tp4028183p4028207.html
Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Logback and Apache Karaf
Isn't it though a separate bundle (pax-logging-service vs. pax-logging-logback)? From looking at the code it looks like whilst it uses the same pid as the pax-logging-service, pax-logging-logback works a little differently (the org.ops4j.pax.logging configuration contains a pointer to the real logback configuration). I can do all this myself (download pax-logging-logback to the embedded repository, change startup properties to use pax-logging-logback instead of pax-logging-service, change the default org.ops4j.pax.logging.properties file and add a logback.xml). It would be nice though if karaf could make it easier (like it does with the equinox/felix switch). I am actually surprised that no-one else asked for this. Is logback not used much? thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Logback-and-Apache-Karaf-tp4028183p4028195.html Sent from the Karaf - User mailing list archive at Nabble.com.
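For anyone attempting the manual switch described above, the central piece is pointing the org.ops4j.pax.logging pid at a Logback configuration. A sketch of etc/org.ops4j.pax.logging.cfg (the property name is the one pax-logging-logback documents; the file path is an assumption):

```properties
# Point pax-logging-logback at a static logback configuration file
org.ops4j.pax.logging.logback.config.file = ${karaf.base}/etc/logback.xml
```

You would also swap pax-logging-service for pax-logging-logback in etc/startup.properties at the same start level, as the post outlines.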
Logback and Apache Karaf
Hi,

Any plans to make it easy to switch the Pax Logging (now 1.7.0) bundled with Karaf to use Logback instead? e.g. like having a:

karaf.loggingbackend = log4j/logback

I ask because I was interested in using logback-specific features (e.g. log4j 1.2 does not support Markers).

thanks in advance,
Gareth

--
View this message in context: http://karaf.922171.n3.nabble.com/Logback-and-Apache-Karaf-tp4028183.html
Sent from the Karaf - User mailing list archive at Nabble.com.
Why Does Spring JMS Depend On Spring Web?
Hi, A quick question - in looking at the Karaf feature files, I see that spring-jms depends on spring-web being installed. Any reason why this is the case? I don't see the connection. In a dev environment (in Eclipse, outside Karaf) I run quite happily without installing the Spring Web bundles. Just trying to understand. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Why-Does-Spring-JMS-Depend-On-Spring-Web-tp4028018.html Sent from the Karaf - User mailing list archive at Nabble.com.
Karaf 3.x Plans?
Hi, Are there plans to release Karaf 3.0 in the near future? I ask because I am interested in using a more current Pax Web release with Karaf (the Pax Web 1.1/Jetty 7 release is now a little old). I am curious - is Karaf 3.x just waiting for new releases of dependencies (e.g. Aries, XBean?)...or are there major features that still need to be implemented? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Karaf-3-x-Plans-tp4027587.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Gogo vs. Karaf Commands
Hi Mike, I wasn't suggesting actually forking and maintaining Karaf. I was just suggesting using GitHub as a collaboration platform to complete the implementation. It is much easier to work with than svn diffs :). Anyway, I have added the JIRA: https://issues.apache.org/jira/browse/KARAF-2121 thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Gogo-vs-Karaf-Commands-tp4027219p4027358.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Gogo vs. Karaf Commands
Hi Gokturk, I am interested in seeing this happen as well. Perhaps you can share what you did so far on github? Just make a fork of apache karaf then push your changes to your fork? thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Gogo-vs-Karaf-Commands-tp4027219p4027303.html Sent from the Karaf - User mailing list archive at Nabble.com.
Gogo vs. Karaf Commands
Hello, I am curious - What is the difference between a Gogo command and a Karaf command? I ask because I wrote for myself a couple of gogo commands. If I run vanilla OSGi (e.g. equinox) with gogo they run fine. The commands also show up if I run gogo "help". If I run these same gogo commands in karaf, they also appear to work. However, the commands don't show up in Karaf help and I cannot take advantage of other advanced karaf console features like autocomplete with these commands. Any reason why this would not work? Is there perhaps just an additional service property I can add which will allow me to take advantage of the additional Karaf console support? Having looked at a few Karaf console command examples, I see they all extend the Karaf class OsgiCommandSupport, which I prefer not to do as I would like to avoid having to explicitly depend on Karaf classes. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Gogo-vs-Karaf-Commands-tp4027219.html Sent from the Karaf - User mailing list archive at Nabble.com.
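For context, a plain Gogo command of the kind described is just a service registered with the osgi.command.scope and osgi.command.function properties. A Blueprint sketch (the class and names are hypothetical, not from the original post; the command bean exposes one public method per function name):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <!-- Hypothetical command class with a public greet() method -->
  <bean id="greetCommand" class="com.mycompany.shell.GreetCommand"/>
  <service ref="greetCommand" auto-export="all-classes">
    <service-properties>
      <entry key="osgi.command.scope" value="mycompany"/>
      <entry key="osgi.command.function" value="greet"/>
    </service-properties>
  </service>
</blueprint>
```

Gogo discovers commands purely through those two service properties, which is presumably why such commands run inside Karaf yet stay invisible to the Karaf-specific help and completion machinery.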
Re: Questions About the 2.3.0 Web Console
Hello Jean-Baptiste, Thanks very much for the quick response! My apologies about mentioning the shutdown issue. I should have checked JIRA. I thought the web console problem may be a timing issue. However I tried stopping and starting the web console bundle, uninstalling/reinstalling the web console feature but I still haven't been able to get the web console plugins to work. I don't see anything very exciting in the logs. Anything else useful I can try? thanks again, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Questions-About-the-2-3-0-Web-Console-tp4026419p4026438.html Sent from the Karaf - User mailing list archive at Nabble.com.
Questions About the 2.3.0 Web Console
Hi,

Thanks for all the hard work in releasing Karaf 2.3.0! Anyway, I am just starting to try out the new release and I am seeing a couple of quirks:

(1) It appears to hang on shutdown. Not a big issue as I can always use ^C.

(2) The webconsole appears to have a few problems:
(a) The following pages come up blank - Admin, Features and Gogo. Do I need to do anything special to see these pages?
(b) I am not seeing the Components plugin in the console. Do I need to do something special here? In the past I just needed to make sure that Felix Declarative Services is started before the webconsole.

I am seeing this behaviour running on CentOS 6.2, JDK 1.7.0_05, using equinox and the following boot feature list:

#
# Comma separated list of features to install at startup
#
featuresBoot=config,ssh,management,scr,,obr,kar,war,webconsole

thanks in advance,
Gareth

--
View this message in context: http://karaf.922171.n3.nabble.com/Questions-About-the-2-3-0-Web-Console-tp4026419.html
Sent from the Karaf - User mailing list archive at Nabble.com.
Can Fixes For These Issues Get Into The Next Release?
Hello, Can fixes for the following issues get into the next release (whether 2.2.10 or 2.3.0 or both)?: https://issues.apache.org/jira/browse/KARAF-1796 https://issues.apache.org/jira/browse/KARAF-1759 https://issues.apache.org/jira/browse/KARAF-1765 I have added suggested fixes for all these defects based on the 2.2.x stream (the fixes work for me). If there is something additional I can do to help make sure these fixes get in please let me know. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Can-Fixes-For-These-Issues-Get-Into-The-Next-Release-tp4026234.html Sent from the Karaf - User mailing list archive at Nabble.com.
Startup Race Conditions For FileInstall/Blueprint
Hello,

I have some questions about FileInstall/Blueprint in Apache Karaf 2.2.9. I created for myself a custom Karaf installation that by default puts a blueprint file into the deploy directory which is available on first startup. Unfortunately I found a small race condition with doing this. Sometimes, on first startup, fileinstall will try to process the file before blueprint is initialized, so I see the following error:

2012-10-01 18:39:44,386 | ERROR | []:[] | pw-133-61/deploy | BlueprintDeploymentListener | rint.BlueprintDeploymentListener 65 | 12 - org.apache.karaf.deployer.blueprint - 2.2.8 | Unable to build blueprint application bundle
java.net.MalformedURLException: unknown protocol: blueprint
at java.net.URL.<init>(URL.java:395)[:1.6.0_29]
at java.net.URL.<init>(URL.java:283)[:1.6.0_29]
at java.net.URL.<init>(URL.java:306)[:1.6.0_29]
at org.apache.karaf.deployer.blueprint.BlueprintDeploymentListener.transform(BlueprintDeploymentListener.java:63)[12:org.apache.karaf.deployer.blueprint:2.2.8]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.transformArtifact(DirectoryWatcher.java:548)[6:org.apache.felix.fileinstall:3.2.4]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.process(DirectoryWatcher.java:467)[6:org.apache.felix.fileinstall:3.2.4]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.run(DirectoryWatcher.java:291)[6:org.apache.felix.fileinstall:3.2.4]

To get around this issue in the past (Apache Karaf 2.2.5), I had set up fileinstall to delay checking the deploy directory:

felix.fileinstall.poll = 3
felix.fileinstall.start.level = 80
felix.fileinstall.noInitialDelay = true

Whilst this works fine on Karaf 2.2.5, when I update to Karaf 2.2.9 this causes me problems on Karaf restarts. This configuration now causes my bundle to be started, then stopped/started again on the first check of the deploy directory (fileinstall refreshes the bundle even though there has been no change).
I suspect this new behaviour is caused by the following change in felix fileinstall 3.2.2:

* [FELIX-3398] - Track new versions of artifact listener and optionally update all bundles when a new version is detected

Given all this information, I have the following questions:

(1) Am I doing anything wrong here? Is it bad form for me to put a file in the deploy directory before first karaf startup? I am putting it in the deploy directory rather than creating a bundle so I can mess with the blueprint file if needed. I put it in the deploy directory on initial start as this application is static - I am not going to be updating bundles after initial install.

(2) Is it a bug for felix fileinstall to refresh the bundle even though nothing has changed... or should I just be accepting that it will always do this? I ask because, as my application is static and initializes a lot of resources, I hadn't planned to worry too much about clean start/stop/start (I only run this app in OSGi as it shares a lot of code with my real OSGi work).

Any suggestions would be greatly appreciated.

thanks in advance,
Gareth

--
View this message in context: http://karaf.922171.n3.nabble.com/Startup-Race-Conditions-For-FileInstall-Blueprint-tp4026233.html
Sent from the Karaf - User mailing list archive at Nabble.com.
Re: More Questions (Mostly On Features)
Hello Jean-Baptiste,

Thanks for the answers. Some more clarifications below:

> Yes, <config> and <configfile> are both supported in Karaf 2.2.x (we
> already use it in Karaf itself, Cellar, etc). However, you should be
> able to see the config (using config:list
> "(service.pid=com.foo.bar-alias)") but not the config file in etc. To
> have a config file in etc, you should use <configfile> in the feature
> (as Cellar does).

OK, after multiple changes (some things needed, some things not - including installing felix scr before installing the webconsole), I do now see the configuration in the webconsole. Unfortunately I hit this outstanding cellar issue:

https://issues.apache.org/jira/browse/KARAF-1492

So once I removed cellar I could get my application to read the configuration.

A question - would it be possible to not tie the hazelcast instance service so closely to cellar? It has value even without bringing in the rest of cellar. I created my own hazelcast service bundle for now to get around this.

>> The advantage of doing it this way is that you can be sure all optional
>> dependencies/fragments are resolved before starting any bundle (thus you
>> don't have to worry about ordering the bundles). I have a patch... and I
>> will open a JIRA unless someone thinks this is foolish. This solves for me
>> problems with getting fragments resolved (which are started after their
>> bundle hosts)
>
> I gonna check that, it's "weird" as we already use fragment in features
> without problem.

How often you see this will depend on whether you use felix or equinox (perhaps equinox starts bundles more quickly?), though I haven't tested with felix recently because the 3.x version had problems which looked like they wouldn't be corrected until 4.x. In my specific case I had about 18 bundles + 3 fragments (some brought in via obr) started in one feature. There are other niggling timing issues I have with features as well.
For example, say you define your featuresBoot like this:

featuresBoot=config,ssh,management,obr,kar,war,my-obr-dependent-feature

The "feature that depends on obr resolution" may not install correctly because obr may not have finished starting before attempting to install the "obr dependent feature". Perhaps there is no reasonable solution to a timing issue like this (it would be really nice if you could somehow make sure a previous feature is started before attempting to install the next one)... but it makes it harder to set up an automated install.

>> <configfile directory="/etc/myconfigfolder">mvn:com.mycompany/myproject/1.0.0/tar.gz/myconfig</configfile>
>>
>> Is this a reasonable idea?
>
> Why not, even if it should be the responsibility of a provisioning
> layer, like Kalumet for instance.
> Kalumet is able to manage config files (for instance) coming from any
> kind of VFS (http:tar, zip, bzip2, cifs, etc).

OK, thanks! I will add a feature request then. I see a lot of good work being done with projects like Apache ACE and newer projects like Kalumet. I will keep an eye on them.

Thanks very much again for the answers!

Gareth

--
View this message in context: http://karaf.922171.n3.nabble.com/More-Questions-Mostly-On-Features-tp4025892p4025910.html
Sent from the Karaf - User mailing list archive at Nabble.com.
More Questions (Mostly On Features)
Hello,

A few questions (this will be a long email):

(1) I see in the documentation that you should be able to do this in a feature file:

<config name="com.foo.bar-alias">
  myProperty = myValue
</config>

where com.foo.bar is a factory pid. What version of karaf should this be available for? Should this work in 2.2.x (normal pids work - I don't see anything for these factory pids)? I see something in 2.3.x (I see something in the UI but nothing goes to the etc directory)... though I haven't got it to work yet (my bundle doesn't see it). Is there any requirement to have a metatype.xml to get this to work?

(2) Should cellar 2.2.4 work with karaf 2.3.x? I am trying it out as I would like to play with an OSGi 4.3 environment. Would karaf 3.0.x and cellar trunk be better? Anyway, here is the exception I get when I try this:

2012-08-31 00:44:04,986 | ERROR | rint Extender: 1 | BlueprintContainerImpl | container.BlueprintContainerImpl 362 | 9 - org.apache.aries.blueprint.core - 1.0.0 | Unable to start blueprint container for bundle org.apache.karaf.cellar.features
org.osgi.service.blueprint.container.ComponentDefinitionException: Unable to intialize bean synchronizer
at org.apache.aries.blueprint.container.BeanRecipe.runBeanProcInit(BeanRecipe.java:710)[9:org.apache.aries.blueprint.core:1.0.0]
at org.apache.aries.blueprint.container.BeanRecipe.internalCreate2(BeanRecipe.java:820)[9:org.apache.aries.blueprint.core:1.0.0]
at org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:783)[9:org.apache.aries.blueprint.core:1.0.0]
at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)[9:org.apache.aries.blueprint.core:1.0.0]
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)[:1.7.0_05]
at java.util.concurrent.FutureTask.run(Unknown Source)[:1.7.0_05]
.
.
Caused by: java.lang.NullPointerException
at org.apache.karaf.cellar.features.FeaturesSynchronizer.pull(FeaturesSynchronizer.java:103)[101:org.apache.karaf.cellar.features:2.2.4]
at org.apache.karaf.cellar.features.FeaturesSynchronizer.init(FeaturesSynchronizer.java:53)[101:org.apache.karaf.cellar.features:2.2.4]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.7.0_05]
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)[:1.7.0_05]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)[:1.7.0_05]
at java.lang.reflect.Method.invoke(Unknown Source)[:1.7.0_05]

(3) It appears that features install bundles in the following way:

install bundle
start bundle
install bundle
start bundle
.
.

Any reason why you don't do this?:

install bundle
install bundle
install bundle
.
.
start bundle
start bundle
start bundle

The advantage of doing it this way is that you can be sure all optional dependencies/fragments are resolved before starting any bundle (thus you don't have to worry about ordering the bundles). I have a patch... and I will open a JIRA unless someone thinks this is foolish. This solves for me problems with getting fragments resolved (which are started after their bundle hosts).

(4) Another idea for a feature for the feature file. Currently you can do this:

<configfile finalname="/etc/jetty.xml">mvn:org.apache.karaf/apache-karaf/2.2.5/xml/jettyconfig</configfile>

It would be nice if you could also lay down a tar to a directory as well (I have a whole host of config files to lay down):

<configfile directory="/etc/myconfigfolder">mvn:com.mycompany/myproject/1.0.0/tar.gz/myconfig</configfile>

Is this a reasonable idea?

thanks in advance,
Gareth

--
View this message in context: http://karaf.922171.n3.nabble.com/More-Questions-Mostly-On-Features-tp4025892.html
Sent from the Karaf - User mailing list archive at Nabble.com.
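The install-all-then-start-all ordering proposed in (3) can be sketched against the plain OSGi API like this (illustrative only, not the Karaf implementation; `resolvedBundles` and the surrounding class are assumed, and fragments are skipped because they cannot be started):

```java
// Phase 1: install every bundle first, so optional imports and fragments
// can all resolve before anything runs.
List<Bundle> installed = new ArrayList<Bundle>();
for (BundleInfo info : resolvedBundles) {
    installed.add(bundleContext.installBundle(info.getLocation()));
}
// Phase 2: start the non-fragment bundles only after everything is installed.
for (Bundle bundle : installed) {
    if (bundle.getHeaders().get(Constants.FRAGMENT_HOST) == null) {
        bundle.start();
    }
}
```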
Re: Get environment variable
Thanks Scott! I was interested in this answer as well. This needs to be documented somewhere as it is an important feature. Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Get-environment-variable-tp4025807p4025832.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Features and OBR (was Re: A Basic OBR Question)
Hello Jean-Baptiste, I have added a patch to fix KARAF-1759. I also added a new defect (KARAF-1765) because I believe the shell should make installing optional dependencies optional (rather than always installing). As well the shell should be consistent with the OBR Web UI and the Karaf OBR resolver in not installing optional dependencies by default (I felt it was confusing). A patch has also been added for this issue. Both patches were tested against a karaf 2.2.9 with shell.obr and features.obr updated for the changes. If these patches look reasonable, if you could put them into your next release it would be a big help! I know you just released 2.2.9, but when do you think that might be? I am just kind of keen on the KARAF-1759 fix :). thanks again, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/A-Basic-OBR-Question-tp4025744p4025800.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Features and OBR (was Re: A Basic OBR Question)
Hello Jean-Baptiste,

One more question: I am trying to build Karaf 2.2.x trunk (mvn clean install). It is failing when I get to the Karaf Client. I get the following error. Anything obvious I may have messed up which would cause this error?

thanks in advance,
Gareth

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:1.7:shade (default) on project org.apache.karaf.client: Error creating shaded jar: Invalid signature file digest for Manifest main attributes -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:1.7:shade (default) on project org.apache.karaf.client: Error creating shaded jar: Invalid signature file digest for Manifest main attributes
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoExecutionException: Error creating shaded jar: Invalid signature file digest for Manifest main attributes
at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:553)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
Caused by: java.lang.SecurityException: Invalid signature file digest for Manifest main attributes
at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:221)
at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:176)
at java.util.jar.JarVerifier.processEntry(JarVerifier.java:245)
at java.util.jar.JarVerifier.update(JarVerifier.java:199)
at java.util.jar.JarFile.initializeVerifier(JarFile.java:323)
at java.util.jar.JarFile.getInputStream(JarFile.java:388)
at org.apache.maven.plugins.shade.DefaultShader.shade(DefaultShader.java:133)
at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:494)
... 21 more

--
View this message in context: http://karaf.922171.n3.nabble.com/A-Basic-OBR-Question-tp4025744p4025792.html
Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Features and OBR (was Re: A Basic OBR Question)
OK, created KARAF-1759 and KARAF-1760 for these issues. thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/A-Basic-OBR-Question-tp4025744p4025787.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Features and OBR (was Re: A Basic OBR Question)
Hello Jean-Baptiste, Sure thing. Should I be creating two JIRAs?: (1) One for global default behaviour configuration. (2) One for per-feature start behaviour configuration (which would require a change to the feature xml schema?). I am curious - When is the next release planned? Will there be another 2.2.x or will the next one be 2.3.x? I ask because I am almost tempted to fix (1) myself as it will really make obr much more useful for me (I have a lot of bundles to install). (2) I guess could only go into a 3+ release because of the schema change required. thanks again, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/A-Basic-OBR-Question-tp4025744p4025786.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Features and OBR (was Re: A Basic OBR Question)
OK, now I think I understand. Looking at ObrResolver.java and BundleInfo.java:

public List resolve(Feature feature) throws Exception {
    ...
    for (BundleInfo bundleInfo : feature.getBundles()) {
        ...
    }
    ...
    for (Resource res : deploy) {   // list of resolved resources from OBR
        ...
        if (info == null) {
            info = new BundleInfoImpl(res.getURI());
        }
        bundles.add(info);
    }
    return bundles;
}

If I now go to BundleInfoImpl.java, start is implicitly set to false by default (it should really be set explicitly), which explains why the dependent bundles are not starting:

public class BundleInfoImpl implements BundleInfo {

    private int startLevel;
    private String location;
    private boolean start;
    private boolean dependency;
    ...
    public BundleInfoImpl(String location) {
        this.location = location;
    }

Would it be possible to add default "start" and "start-level" options to the obr resolver so I can get these dependent bundles started? Or could this somehow be set in the feature file as configuration values for each feature? This would make the obr resolver far more useful.

thanks in advance,
Gareth

--
View this message in context: http://karaf.922171.n3.nabble.com/A-Basic-OBR-Question-tp4025744p4025782.html
Sent from the Karaf - User mailing list archive at Nabble.com.
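The "implicitly set to false" observation above is just Java's default field initialization: a boolean field that is never assigned defaults to false, and an int to 0. A minimal standalone demonstration (the nested class mimics the shape of BundleInfoImpl but is a hypothetical stand-in, not the Karaf source):

```java
public class BundleInfoDefaultDemo {

    // Simplified stand-in for Karaf's BundleInfoImpl
    public static class BundleInfoImpl {
        private final String location;
        private boolean start;   // never assigned -> defaults to false
        private int startLevel;  // never assigned -> defaults to 0

        public BundleInfoImpl(String location) {
            this.location = location;
        }

        public String getLocation() { return location; }
        public boolean isStart() { return start; }
        public int getStartLevel() { return startLevel; }
    }

    public static void main(String[] args) {
        BundleInfoImpl info =
            new BundleInfoImpl("mvn:com.mycompany/myfeature-bundle/1.0.0-SNAPSHOT");
        // prints "start=false startLevel=0"
        System.out.println("start=" + info.isStart()
            + " startLevel=" + info.getStartLevel());
    }
}
```

So every BundleInfoImpl created from an OBR-resolved resource reports start=false, and the feature installer never starts those bundles.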
Re: Features and OBR (was Re: A Basic OBR Question)
OK, I think I understand the resolution difference. From the features-obr.xml blueprint, the obr resolver reads its configuration from:

file:$(karaf.base)/etc/org.apache.karaf.features.obr.cfg

So I assume this configuration does not apply when you execute obr:deploy?

Still trying to figure out how to auto-start these dependent bundles. If someone could point me to how to do that, it would be a big help (I don't see the answer in the documentation or the forums).

thanks in advance,
Gareth

--
View this message in context: http://karaf.922171.n3.nabble.com/A-Basic-OBR-Question-tp4025744p4025775.html
Sent from the Karaf - User mailing list archive at Nabble.com.
Features and OBR (was Re: A Basic OBR Question)
Hello,

I am trying instead to install bundles using the OBR feature resolver. I set up my own little feature file:

<features xmlns="http://karaf.apache.org/xmlns/features/v1.0.0">
  <feature name="myfeature" version="1.0.0-SNAPSHOT" resolver="obr">
    <bundle>mvn:com.mycompany/myfeature-bundle/1.0.0-SNAPSHOT</bundle>
  </feature>
</features>

This works great and it retrieves, installs and resolves dependent bundles from my obr repositories. How do I get the dependent bundles to be automatically started though? As well, myfeature-bundle depends on netty, yet when I install this way osgi.cmpn does not get pulled in. Is there any difference installing this way instead of using obr:deploy (myfeature-bundle is in maven and the obr repository)?

thanks in advance,
Gareth

--
View this message in context: http://karaf.922171.n3.nabble.com/A-Basic-OBR-Question-tp4025744p4025773.html
Sent from the Karaf - User mailing list archive at Nabble.com.
A Basic OBR Question
Hello,

I have set myself up with the open source Nexus and some OBR repositories with a lot of bundles from central... as well as some bundles that I have created. I am now going through and figuring out how I would like to install my bundles using obr. My apologies if I am missing something obvious here, but I am having some trouble that I don't understand.

From my obr repositories (and karaf 2.2.9) I am trying to install the netty NIO framework via obr, and for some reason obr keeps pulling in osgi.cmpn. Here are the org.jboss.netty requirements (in the Manifest all imports are optional):

Requires:
package:(&(package=com.google.protobuf))
package:(&(package=javax.activation))
package:(&(package=javax.net.ssl))
package:(&(package=javax.servlet))
package:(&(package=javax.servlet.http))
package:(&(package=org.apache.commons.logging))
package:(&(package=org.apache.log4j))
package:(&(package=org.jboss.logging))
package:(&(package=org.jboss.marshalling))
package:(&(package=org.jboss.netty.bootstrap)(version>=3.5.0)(!(version>=4.0.0)))
package:(&(package=org.jboss.netty.handler.codec.frame)(version>=3.5.0)(!(version>=4.0.0)))
package:(&(package=org.jboss.netty.handler.codec.http.websocketx)(version>=3.5.0)(!(version>=4.0.0)))
package:(&(package=org.jboss.netty.handler.codec.oneone)(version>=3.5.0)(!(version>=4.0.0)))
package:(&(package=org.jboss.netty.handler.codec.protobuf)(version>=3.5.0)(!(version>=4.0.0)))
package:(&(package=org.jboss.netty.handler.codec.serialization)(version>=3.5.0)(!(version>=4.0.0)))
package:(&(package=org.jboss.netty.handler.codec.string)(version>=3.5.0)(!(version>=4.0.0)))
package:(&(package=org.jboss.netty.handler.logging)(version>=3.5.0)(!(version>=4.0.0)))
package:(&(package=org.jboss.netty.handler.timeout)(version>=3.5.0)(!(version>=4.0.0)))
package:(&(package=org.osgi.framework)(version>=1.5.0)(!(version>=2.0.0)))
package:(&(package=org.osgi.service.log)(version>=1.3.0)(!(version>=2.0.0)))
package:(&(package=org.osgi.util.tracker)(version>=1.4.0)(!(version>=2.0.0)))
package:(&(package=org.slf4j)(version>=1.6.0)(!(version>=2.0.0)))
package:(&(package=sun.misc))

The three osgi imports here are already exported by other bundles (org.osgi.framework and org.osgi.util.tracker are exported by the system bundle, org.osgi.service.log is exported by pax-logging). The netty packages it requires it also exports. What other reason would there be for the compendium API to be brought in here?

Alternatively, is there any way to stop obr:deploy from deploying optional resources?:

karaf@root> obr:deploy org.jboss.netty
Target resource(s):
---
   The Netty Project (3.5.2.Final)
Optional resource(s):
---
   osgi.cmpn (4.2.0.200908310645)

Any suggestions here would be very helpful.

thanks in advance,
Gareth Collins

--
View this message in context: http://karaf.922171.n3.nabble.com/A-Basic-OBR-Question-tp4025744.html
Sent from the Karaf - User mailing list archive at Nabble.com.
Re: org.osgi.service.jdbc and Karaf
Cristian, Achim, Thank you very much for the ideas! I will think about this some more. Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/org-osgi-service-jdbc-and-Karaf-tp4025573p4025603.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: org.osgi.service.jdbc and Karaf
Hi Christian,

I saw that. It doesn't quite meet my needs because I can't just do something like this, as I am required to have multiple versions of com.mysql.jdbc.jdbc2.optional.MysqlDataSource accessible (I need to install multiple mysql drivers to communicate with multiple mysql versions). Instead, rather than making the JDBC drivers stand-alone bundles, I am wrapping my JDBC drivers in DataSourceFactories, which allows me to choose the correct data source factory via OSGi service properties (which also means the mapping from JDBC URL to JDBC DataSource is configurable rather than being defined by import-package).

Gareth
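[Editor's note] A minimal sketch of the service-property side of the approach described above. The property names are the standard ones from the OSGi JDBC Service Specification (the DataSourceFactory.OSGI_JDBC_DRIVER_* constants); they are inlined here so the sketch compiles without the enterprise jar. The driver class and version values are illustrative, and the actual registerService call (shown in a comment) needs a running OSGi framework:

```java
import java.util.Dictionary;
import java.util.Hashtable;

public class DataSourceFactoryProps {

    // Standard service-property names from the OSGi JDBC Service
    // Specification, inlined for a self-contained sketch.
    static final String OSGI_JDBC_DRIVER_CLASS = "osgi.jdbc.driver.class";
    static final String OSGI_JDBC_DRIVER_VERSION = "osgi.jdbc.driver.version";

    // Build the properties one DataSourceFactory registration would carry.
    // A consumer can then select a specific driver version with a service
    // filter such as:
    //   (&(osgi.jdbc.driver.class=com.mysql.jdbc.Driver)(osgi.jdbc.driver.version=5.1))
    static Dictionary<String, Object> registrationProps(String driverClass,
                                                        String driverVersion) {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put(OSGI_JDBC_DRIVER_CLASS, driverClass);
        props.put(OSGI_JDBC_DRIVER_VERSION, driverVersion);
        return props;
    }

    // In a bundle activator one would then register the factory with these
    // properties (needs a framework, so only shown as a comment):
    //   context.registerService(DataSourceFactory.class.getName(), factory, props);
}
```

With one registration per wrapped driver, several MySQL driver versions can coexist and be looked up by filter instead of by import-package wiring.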
Re: org.osgi.service.jdbc and Karaf
Is it 5.0 only though? It is in the 4.2 javadoc: http://www.osgi.org/javadoc/r4v42/org/osgi/service/jdbc/DataSourceFactory.html and there is a JDBC Service Specification chapter in the R4.2 enterprise specification (the chapter version was not updated for R5 - it is still 1.0).

I had a quick look at the Pax JDBC code. It doesn't appear to export this interface either. Perhaps it should (via the pax-jdbc module)?

Anyway, I got around the problem (for now) by including the osgi.enterprise 4.2 jar.

thanks again,
Gareth
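[Editor's note] For reference, pulling the interface from the OSGi enterprise companion jar can be done with a Maven dependency along these lines (artifact coordinates for the R4.2 release; provided scope so the classes are compiled against but supplied by the runtime):

```xml
<dependency>
    <groupId>org.osgi</groupId>
    <artifactId>org.osgi.enterprise</artifactId>
    <version>4.2.0</version>
    <scope>provided</scope>
</dependency>
```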
org.osgi.service.jdbc and Karaf
Hello,

I created some data source factories around some mysql JDBC drivers. When running in karaf 2.2.8 I couldn't find a bundle which exported the org.osgi.service.jdbc package (I assumed that perhaps aries transaction or jpa might). What is the right way to get this interface? Should I just be including osgi.enterprise...or should this be coming from somewhere else?

thanks in advance,
Gareth
Karaf Cellar - Site Vs. Instance Configuration
Hello Jean-Baptiste (or anyone else on the Karaf team),

From my understanding, cellar will automatically sync configuration between machines. Is my understanding correct? I also see the configuration files I added to the etc directory (these are all config files for declarative service apps) are owned by bundle "Apache Karaf :: Cellar :: Config". Is that correct?

If my understanding is correct, if I use Cellar what is the right way to handle instance-specific information (like IP addresses, instance ids, etc.)? Should I be using files outside of configuration admin, or is there another recommended way?

While I am here, will Karaf 3 be coming out soon? No rush, just curious...:)

thanks in advance,
Gareth Collins
Re: Wrong Execution Environment Names in Karaf 2.2.4?
Hello Jean-Baptiste,

I second the request for JavaSE-1.6/JavaSE-1.7 execution environments. As well, I tried to add in the Equinox implementation of Declarative Services (unless I missed it, I didn't see a Karaf feature for Declarative Services). When I tried to install org.eclipse.equinox.util_1.0.200.v20100503.jar (required for installing equinox DS), it requested the execution environment "OSGi/Minimum-1.1". I can add this manually, but would it make sense to add this execution environment to the Karaf list as well?

thanks in advance,
Gareth
Re: Add another Jetty Server / OSGI
Hello Chris,

I think you are asking the same question I did: http://karaf.922171.n3.nabble.com/Assigning-Wars-To-Ports-And-Subdirectories-td3210710.html

From my understanding, we need to wait for Pax Web 2.0.0.

regards,
Gareth
Re: Adding Additional Cellar Map Configuration?
Added the JIRA here, including the other suggested changes to the cellar 2.2.3 trunk: https://issues.apache.org/jira/browse/KARAF-865

To explain further what I am trying to do on shutdown. Some of the data I will store to Hazelcast will be instance-specific data (for example, who is connected to the current karaf instance). This immediately becomes invalid if the karaf instance shuts down (as a user cannot be connected to a karaf instance that isn't running). Any users of this instance-specific data will, of course, need to check somehow whether this data is invalid (since the specific karaf instance may shut down abnormally, by a "kill -9" or a power loss). However, it would be nice on a clean shutdown to be able to make sure all the instance-specific data is cleared. One way to clear it is to delete this instance's entries in Activator.stop. But to do this the Hazelcast instance would still need to be active at that time (which I am finding is not always the case).

Does this make sense? What do you think?

thanks in advance,
Gareth
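[Editor's note] A minimal sketch of the cleanup step described above, assuming a hypothetical keying scheme where each entry records the owning node id. In the real bundle this would run from BundleActivator.stop() against a Hazelcast IMap, which only works if the Hazelcast instance is still alive at that point (the problem raised in this thread). A plain Map stands in here so the logic is self-contained:

```java
import java.util.Map;

public class InstanceCleanup {

    // Remove every entry owned by the given node id. Note: with a real
    // Hazelcast IMap the collection views are detached copies, so one would
    // iterate the matching keys and call map.remove(key) instead of using
    // a live removeIf on the values view as this local sketch does.
    public static void evictOwnedEntries(Map<String, String> connectedUsers,
                                         String nodeId) {
        connectedUsers.values().removeIf(owner -> owner.equals(nodeId));
    }
}
```

Consumers of the shared map would still need the staleness check mentioned above, since this cleanup never runs on an abnormal shutdown.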
Re: Adding Additional Cellar Map Configuration?
Hello Ioannis,

I believe this can be fixed by changing the HazelcastBundleListener to be a SynchronousBundleListener (I believe the error occurred because I accessed Hazelcast in my Activator.start()). Also, I see the BundleEvent.STOPPING will remove the classloader. Will the bundle listener receive the STOPPING event before or after Activator.stop is called (I would like to be able to execute a hazelcast action in Activator.stop)?

If I run shutdown on karaf, it would be nice to be able to execute a hazelcast command before cellar shuts down the hazelcast instance. When I try to do this now, it appears the cellar hazelcast bundle beats me to it (I get a hazelcast instance stopped error). Would you have any suggestion here?

thanks in advance,
Gareth

On Wed, Sep 14, 2011 at 11:16 PM, Gareth Collins wrote:
> Hello Ioannis,
>
> I tried my example again. I am still having a problem with the trunk
> cellar version (which should include the fix for 842). I am not sure
> the fix is completely there. The exception is different now (see
> below).
> > thanks, > Gareth > > java.lang.ClassNotFoundException: com.mytestcompany.PredTester > at > org.apache.karaf.cellar.core.utils.CombinedClassLoader.findClass(CombinedClassLoader.java:62) > at java.lang.ClassLoader.loadClass(ClassLoader.java:306) > at java.lang.ClassLoader.loadClass(ClassLoader.java:247) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:247) > at > com.hazelcast.nio.AbstractSerializer.classForName(AbstractSerializer.java:72) > at > com.hazelcast.nio.AbstractSerializer.classForName(AbstractSerializer.java:57) > at > com.hazelcast.nio.Serializer$DataSerializer.classForName(Serializer.java:83) > at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:93) > at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:69) > at > com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:105) > at > com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:135) > at com.hazelcast.nio.Serializer.readObject(Serializer.java:62) > at com.hazelcast.impl.ThreadContext.toObject(ThreadContext.java:113) > at com.hazelcast.nio.IOUtil.toObject(IOUtil.java:149) > at com.hazelcast.impl.Record.getValue(Record.java:143) > at > com.hazelcast.query.Predicates$GetExpressionImpl.doGetValue(Predicates.java:842) > at > com.hazelcast.query.Predicates$GetExpressionImpl.getValue(Predicates.java:836) > at > com.hazelcast.query.Predicates$EqualPredicate.apply(Predicates.java:450) > at com.hazelcast.query.PredicateBuilder.apply(PredicateBuilder.java:32) > at > com.hazelcast.impl.ConcurrentMapManager$QueryOperationHandler.createResultPairs(ConcurrentMapManager.java:2658) > at > com.hazelcast.impl.ConcurrentMapManager$QueryOperationHandler$QueryTask.run(ConcurrentMapManager.java:2627) > at > com.hazelcast.impl.executor.ParallelExecutorService$ParallelExecutorImpl$ExecutionSegment.run(ParallelExecutorService.java:179) > at > java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) 
> at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) > at java.lang.Thread.run(Thread.java:680) > > On Wed, Sep 7, 2011 at 6:13 PM, iocanel [via Karaf] > wrote: >> The branch 2.2.x is now back to stable. >> >> On Wed, Sep 7, 2011 at 10:28 AM, Ioannis Canellos <[hidden email]> wrote: >>> >>> You will need to wait a bit before you try 842. The branch needs some more >>> fixes. >>> >>>> >>>> -- >>>> Ioannis Canellos >>>> http://iocanel.blogspot.com >>>> Apache Karaf Committer & PMC >>>> Apache ServiceMix Committer >>>> Apache Gora Committer >>>> >>>> >>>> >>>> >>>> >>> >>> >>> >>> -- >>> Ioannis Canellos >>> http://iocanel.blogspot.com >>> Apache Karaf Committer & PMC >>> Apache ServiceMix Committer >>> Apache Gora Committer >>> >>> >>> >>> >>> >> >> >> >> -- >> Ioannis Canellos >> http://iocanel.blogspot.com >> Apache Karaf Committer & PMC >> Apache ServiceMix Committer >> Apache Gora Committer >> >> >> >> >> >> Ioannis Canellos http://iocanel.blogspot.com >> >> >> If you reply to this email, your message will be added to the discussion >> below: >> http://karaf.922171.n3.nabble.com/Adding-Additional-Cellar-Map-Configuration-tp3305655p3318095.html >> To unsubscribe from Adding Additional Cellar Map Configuration?, click here. > -- View this message in context: http://karaf.922171.n3.nabble.com/Adding-Additional-Cellar-Map-Configuration-tp3305655p3337894.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Adding Additional Cellar Map Configuration?
Hello Ioannis,

I tried my example again. I am still having a problem with the trunk cellar version (which should include the fix for 842). I am not sure the fix is completely there. The exception is different now (see below).

thanks,
Gareth

java.lang.ClassNotFoundException: com.mytestcompany.PredTester
	at org.apache.karaf.cellar.core.utils.CombinedClassLoader.findClass(CombinedClassLoader.java:62)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:247)
	at com.hazelcast.nio.AbstractSerializer.classForName(AbstractSerializer.java:72)
	at com.hazelcast.nio.AbstractSerializer.classForName(AbstractSerializer.java:57)
	at com.hazelcast.nio.Serializer$DataSerializer.classForName(Serializer.java:83)
	at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:93)
	at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:69)
	at com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:105)
	at com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:135)
	at com.hazelcast.nio.Serializer.readObject(Serializer.java:62)
	at com.hazelcast.impl.ThreadContext.toObject(ThreadContext.java:113)
	at com.hazelcast.nio.IOUtil.toObject(IOUtil.java:149)
	at com.hazelcast.impl.Record.getValue(Record.java:143)
	at com.hazelcast.query.Predicates$GetExpressionImpl.doGetValue(Predicates.java:842)
	at com.hazelcast.query.Predicates$GetExpressionImpl.getValue(Predicates.java:836)
	at com.hazelcast.query.Predicates$EqualPredicate.apply(Predicates.java:450)
	at com.hazelcast.query.PredicateBuilder.apply(PredicateBuilder.java:32)
	at com.hazelcast.impl.ConcurrentMapManager$QueryOperationHandler.createResultPairs(ConcurrentMapManager.java:2658)
	at com.hazelcast.impl.ConcurrentMapManager$QueryOperationHandler$QueryTask.run(ConcurrentMapManager.java:2627)
	at
com.hazelcast.impl.executor.ParallelExecutorService$ParallelExecutorImpl$ExecutionSegment.run(ParallelExecutorService.java:179) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:680) On Wed, Sep 7, 2011 at 6:13 PM, iocanel [via Karaf] wrote: > The branch 2.2.x is now back to stable. > > On Wed, Sep 7, 2011 at 10:28 AM, Ioannis Canellos <[hidden email]> wrote: >> >> You will need to wait a bit before you try 842. The branch needs some more >> fixes. >> >>> >>> -- >>> Ioannis Canellos >>> http://iocanel.blogspot.com >>> Apache Karaf Committer & PMC >>> Apache ServiceMix Committer >>> Apache Gora Committer >>> >>> >>> >>> >>> >> >> >> >> -- >> Ioannis Canellos >> http://iocanel.blogspot.com >> Apache Karaf Committer & PMC >> Apache ServiceMix Committer >> Apache Gora Committer >> >> >> >> >> > > > > -- > Ioannis Canellos > http://iocanel.blogspot.com > Apache Karaf Committer & PMC > Apache ServiceMix Committer > Apache Gora Committer > > > > > > Ioannis Canellos http://iocanel.blogspot.com > > > If you reply to this email, your message will be added to the discussion > below: > http://karaf.922171.n3.nabble.com/Adding-Additional-Cellar-Map-Configuration-tp3305655p3318095.html > To unsubscribe from Adding Additional Cellar Map Configuration?, click here. -- View this message in context: http://karaf.922171.n3.nabble.com/Adding-Additional-Cellar-Map-Configuration-tp3305655p3337857.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: ActiveMq Listening on tcp6
Hello Geoffry, What you can put in the broker config comes from here: https://activemq.apache.org/schema/core/activemq-core-5.5.0.xsd To cut to the chase, I am attaching a broker blueprint file of mine which I know works. It is largely the default except for the additional transport connector and a network connector. Just remove the network connector (as you are not setting up a broker network yet) and change the IP address in the transport connector to reflect your producer IP and deploy to karaf. I am assuming the karaf instance which you will deploy to will have at least the activemq and activemq-blueprint features installed. Let me know if this works for you. regards, Gareth On Wed, Sep 14, 2011 at 4:38 PM, gcr [via Karaf] wrote: > Gareth, > > This is what I get from netstat -anp --tcp: > > tcp6 0 0 127.0.0.1:61616 :::* > LISTEN - > > It's that tcp6 business that concerns me. > > See below: > > On 14 September 2011 13:12, Gareth <[hidden email]> wrote: >> >> Hello Geoffry, >> >> On the machine with your broker, do you see the following in netstat?: >> >> tcp 0 0 ::::61616 :::* >> LISTEN >> tcp 0 0 :::127.0.0.1:61616 :::* >> LISTEN >> >> If yes, from the consumer machine, can you now run: >> >> telnet 61616 > > I'll have to install telnet first. >> >> and connect (to confirm the port is really accessible)? If yes, this is >> now >> an issue on your consumer. >> >> I am not understanding one thing though. How were you configuring your >> broker originally? I was expecting you to update an existing broker >> configuration file (whether it be outside OSGi, or defined inside OSGi as >> a >> spring-dm/blueprint file)...rather than creating a new one. > > I had a file called localhost-broker.xml that I created from the karaf > client command line. (I don't recall the exact command I ran.) It contained > a transport connector element similar to what you have prescribed. 
As I > thrashed around, trying to get this thing to work, I removed it because none > of the sketchy documentation I was referencing seemed to include it. >> >> regards, >> Gareth >> >> >> >> -- >> View this message in context: >> http://karaf.922171.n3.nabble.com/ActiveMq-Listening-on-tcp6-tp3334296p3337007.html >> Sent from the Karaf - User mailing list archive at Nabble.com. > > > > -- > Geoffry Roberts > > > > > If you reply to this email, your message will be added to the discussion > below: > http://karaf.922171.n3.nabble.com/ActiveMq-Listening-on-tcp6-tp3334296p3337079.html > To unsubscribe from ActiveMq Listening on tcp6, click here. -- View this message in context: http://karaf.922171.n3.nabble.com/ActiveMq-Listening-on-tcp6-tp3334296p3337343.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: ActiveMq Listening on tcp6
Hello Geoffry,

On the machine with your broker, do you see the following in netstat?:

tcp 0 0 ::::61616 :::* LISTEN
tcp 0 0 :::127.0.0.1:61616 :::* LISTEN

If yes, from the consumer machine, can you now run:

telnet 61616

and connect (to confirm the port is really accessible)? If yes, this is now an issue on your consumer.

I am not understanding one thing though. How were you configuring your broker originally? I was expecting you to update an existing broker configuration file (whether it be outside OSGi, or defined inside OSGi as a spring-dm/blueprint file)...rather than creating a new one.

regards,
Gareth
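[Editor's note] When telnet isn't installed, the same reachability check can be done from any JVM with a plain socket connect. This is a generic sketch (host, port, and timeout are whatever you want to probe, e.g. the broker host and 61616):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {

    // Returns true if a TCP connection to host:port succeeds within
    // timeoutMs; false on refusal or timeout. Equivalent to the manual
    // "telnet <host> <port>" test suggested above.
    public static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```

For example, PortCheck.reachable("broker-host", 61616, 2000) run from the consumer machine tells you whether the listener is actually accessible from there.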
Re: Inconsistent deployment behavior between bundle installer and feature installer
Peter,

I am curious: which OSGi framework are you using with Karaf (Felix or Equinox)? If you are using Felix, could you try with Equinox and see if you see the same behaviour?

thanks,
Gareth
Re: ActiveMq Listening on tcp6
Geoffry,

The transport connector configuration update is for your broker, not for the broker clients (producer and consumer).

regards,
Gareth
Re: ActiveMq Listening on tcp6
Geoffry,

Do this in your activemq blueprint config (or equivalent); add this line:

<transportConnector name="openwire remote" uri="tcp://<server ip>:61616"/>

I believe the "tcp://localhost:61616" means it will only listen for local connections. You need to explicitly tell it to listen on 61616 on each interface IP address (I found this out a few days ago myself :)).

regards,
Gareth
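[Editor's note] A hedged sketch of what the transportConnectors section of the broker config could look like. Binding to 0.0.0.0 listens on every interface, which can be simpler than listing each interface IP as suggested above (connector name and port are illustrative):

```xml
<transportConnectors>
    <!-- 0.0.0.0 binds all interfaces; use a specific IP to restrict access -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
</transportConnectors>
```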
Re: Cellar Commands Not Appearing In Karaf Console When Equinox Used
I should have checked JIRA. I see the commands in the trunk version. thanks again, Gareth On Sun, Sep 11, 2011 at 3:15 PM, Jean-Baptiste Onofré [via Karaf] wrote: > You are right Achim ;) > > Regards > JB > > On 09/11/2011 07:55 PM, Achim Nierbeck wrote: >> Hi Gareth, >> >> it's a known already fixed bug ;) >> >> https://issues.apache.org/jira/browse/KARAF-841 >> >> it will be available with cellar 2.2.3 >> >> Regards, Achim >> >> Am 11.09.2011 06:42, schrieb Gareth: >>> Hello, >>> >>> I am having a small problem running cellar with equinox. >>> >>> If I start karaf 2.2.3 fresh (felix) and do the following: >>> >>>> features:install webconsole >>>> features:addurl >>>> mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.2/xml/features >>>> features:install cellar >>> I see all the cellar commands appear in the karaf console (e.g. >>> cluster:config-list, cluster:config-proplist etc). >>> >>> If I do the same with a fresh karaf 2.2.3 equinox (I set the >>> karaf.framework >>> to equinox in config.properties), the cellar commands do not appear >>> (though >>> cellar appears to still be running as my two Karaf nodes find each >>> other). >>> >>> Anything obvious I missed that would cause the cellar commands not to >>> appear >>> in the console? Anything obvious I can check to help debug this? >>> >>> thanks in advance, >>> Gareth >>> >>> -- >>> View this message in context: >>> >>> http://karaf.922171.n3.nabble.com/Cellar-Commands-Not-Appearing-In-Karaf-Console-When-Equinox-Used-tp3326416p3326416.html >>> >>> Sent from the Karaf - User mailing list archive at Nabble.com. 
>> >> > -- > Jean-Baptiste Onofré > [hidden email] > http://blog.nanthrax.net > Talend - http://www.talend.com > > > > If you reply to this email, your message will be added to the discussion > below: > http://karaf.922171.n3.nabble.com/Cellar-Commands-Not-Appearing-In-Karaf-Console-When-Equinox-Used-tp3326416p3327471.html > To unsubscribe from Cellar Commands Not Appearing In Karaf Console When > Equinox Used, click here.
Re: Adding Additional Cellar Map Configuration?
Hello Ioannis,

I see it. Here is the change for the Spring exception:

> svn diff hazelcast/src/main/java/org/apache/karaf/cellar/hazelcast/HazelcastClusterManager.java
Index: hazelcast/src/main/java/org/apache/karaf/cellar/hazelcast/HazelcastClusterManager.java
===
--- hazelcast/src/main/java/org/apache/karaf/cellar/hazelcast/HazelcastClusterManager.java (revision 1169574)
+++ hazelcast/src/main/java/org/apache/karaf/cellar/hazelcast/HazelcastClusterManager.java (working copy)
@@ -181,4 +181,12 @@
 public void setConfigurationAdmin(ConfigurationAdmin configurationAdmin) {
 this.configurationAdmin = configurationAdmin;
 }
+
+public CombinedClassLoader getCombinedClassManager() {
+    return combinedClassLoader;
+}
+
+public void setCombinedClassLoader(CombinedClassLoader combinedClassLoader) {
+    this.combinedClassLoader = combinedClassLoader;
+}
 }

thanks,
Gareth

On Sun, Sep 11, 2011 at 9:10 PM, Gareth Collins wrote:
> Hello Ioannis,
>
> It now builds fine, but one change was needed to get the feature to
> install (adding dosgi):
>
> svn diff assembly/src/main/resources/features.xml
> Index: assembly/src/main/resources/features.xml
> ===
> --- assembly/src/main/resources/features.xml (revision 1169574)
> +++ assembly/src/main/resources/features.xml (working copy)
> @@ -36,6 +36,7 @@
> mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.hazelcast/${project.version}
> mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.config/${project.version}
> mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.features/${project.version}
> + mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.dosgi/${project.version}
> mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.bundle/${project.version}
> mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.utils/${project.version}
> mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.shell/${project.version}
>
> When I did get the feature installed, I am seeing the following
> exception.
Is there a typo somewhere in a spring xml file?: > > Exception in thread "SpringOsgiExtenderThread-2" > org.springframework.beans.factory.BeanCreationException: Error > creating bean with name 'clusterManager' defined in URL > [bundleentry://174.fwk1464447632/META-INF/spring/beans.xml]: Error > setting property values; nested exception is > org.springframework.beans.NotWritablePropertyException: Invalid > property 'combinedClassLoader' of bean class > [org.apache.karaf.cellar.hazelcast.HazelcastClusterManager]: Bean > property 'combinedClassLoader' is not writable or has an invalid > setter method. Does the parameter type of the setter match the return > type of the getter? > at > org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1361) > at > org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1086) > at > org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517) > at > org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) > at > org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293) > at > org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) > at > org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290) > at > org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:192) > at > org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585) > at > org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895) > at > 
org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.access$1600(AbstractDelegatedExecutionApplicationContext.java:69) > at > org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext$4.run(AbstractDelegatedExecutionApplicationContext.java:355) > at > org.springframework.osgi.util.internal.Pr
Re: Adding Additional Cellar Map Configuration?
Hello Ioannis,

It now builds fine, but one change was needed to get the feature to install (adding dosgi):

svn diff assembly/src/main/resources/features.xml
Index: assembly/src/main/resources/features.xml
===
--- assembly/src/main/resources/features.xml (revision 1169574)
+++ assembly/src/main/resources/features.xml (working copy)
@@ -36,6 +36,7 @@
 mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.hazelcast/${project.version}
 mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.config/${project.version}
 mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.features/${project.version}
+ mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.dosgi/${project.version}
 mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.bundle/${project.version}
 mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.utils/${project.version}
 mvn:org.apache.karaf.cellar/org.apache.karaf.cellar.shell/${project.version}

When I did get the feature installed, I am seeing the following exception. Is there a typo somewhere in a spring xml file?:

Exception in thread "SpringOsgiExtenderThread-2" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'clusterManager' defined in URL [bundleentry://174.fwk1464447632/META-INF/spring/beans.xml]: Error setting property values; nested exception is org.springframework.beans.NotWritablePropertyException: Invalid property 'combinedClassLoader' of bean class [org.apache.karaf.cellar.hazelcast.HazelcastClusterManager]: Bean property 'combinedClassLoader' is not writable or has an invalid setter method. Does the parameter type of the setter match the return type of the getter?
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1361)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1086)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
	at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:192)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
	at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.access$1600(AbstractDelegatedExecutionApplicationContext.java:69)
	at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext$4.run(AbstractDelegatedExecutionApplicationContext.java:355)
	at org.springframework.osgi.util.internal.PrivilegedUtils.executeWithCustomTCCL(PrivilegedUtils.java:85)
	at org.springframework.osgi.context.support.AbstractDelegatedExecutionApplicationContext.completeRefresh(AbstractDelegatedExecutionApplicationContext.java:320)
	at
org.springframework.osgi.extender.internal.dependencies.startup.DependencyWaiterApplicationContextExecutor$CompleteRefreshTask.run(DependencyWaiterApplicationContextExecutor.java:132)
	at java.lang.Thread.run(Thread.java:680)
Caused by: org.springframework.beans.NotWritablePropertyException: Invalid property 'combinedClassLoader' of bean class [org.apache.karaf.cellar.hazelcast.HazelcastClusterManager]: Bean property 'combinedClassLoader' is not writable or has an invalid setter method. Does the parameter type of the setter match the return type of the getter?
	at org.springframework.beans.BeanWrapperImpl.setPropertyValue(BeanWrapperImpl.java:1052)
	at org.springframework.beans.BeanWrapperImpl.setPropertyValue(BeanWrapperImpl.java:921)
	at org.springframework.beans.AbstractPropertyAccessor.setPropertyValues(AbstractPropertyAccessor.java:76)
	at org.springframework.beans.AbstractPropertyAccessor.setPropertyValues(AbstractPropertyAccessor.java:58)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1358)
	... 15 more

thanks in advance,
Gareth
Cellar Commands Not Appearing In Karaf Console When Equinox Used
Hello,

I am having a small problem running cellar with equinox.

If I start karaf 2.2.3 fresh (felix) and do the following:

> features:install webconsole
> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.2/xml/features
> features:install cellar

I see all the cellar commands appear in the karaf console (e.g. cluster:config-list, cluster:config-proplist etc).

If I do the same with a fresh karaf 2.2.3 equinox (I set the karaf.framework to equinox in config.properties), the cellar commands do not appear (though cellar appears to still be running, as my two Karaf nodes find each other).

Anything obvious I missed that would cause the cellar commands not to appear in the console? Anything obvious I can check to help debug this?

thanks in advance,
Gareth
Re: A Weird Issue
All,

I did some more testing, finding even more issues with felix classloading. In a certain scenario I was even having problems loading a wab's classes (some loaded correctly and some threw ClassNotFoundException). I rebuilt my karaf environment using equinox, and all classes loaded correctly, so I guess this problem is with Felix. Looking at the Felix JIRA issues I see many related to classloading (many to be fixed in Felix 4.0.0). I see Karaf is using an old version of Felix (3.0.9; current is 3.2.2). Is there a plan to upgrade Karaf to a more current Felix version (e.g. Felix 4.0.0 for Karaf 3.0.0)? For now it looks like I will stick with equinox. I assume equinox will continue to be supported moving forward?

thanks in advance,
Gareth
Re: Is the vm:// prorocol Supported in Karaf?
Hello Geoffry Are you sure that vm URL is right? Shouldn't it just be: vm://localhost (not vm://localhost:61616) What error do you see in the log? regards, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Is-the-vm-prorocol-Supported-in-Karaf-tp3324263p3324467.html Sent from the Karaf - User mailing list archive at Nabble.com.
A Weird Issue
Hello all, I have a weird issue. It is not a major issue and I can get around it, but I thought I should raise it just in case someone else has a similar problem. My development environment is eclipse. My bundles are PDE projects so it is easy for me to run and debug during development. I have a run configuration which starts the equinox OSGi framework and loads and starts my bundles. As part of my development I have used the java security extensions and the java SAX parser. Equinox in eclipse is quite happy with me not declaring javax.security.* and org.xml.sax.* as imports in my manifests (as I assume these are standard parts of java). I am using java security and the SAX parser from a wab. Now that I have some significant functionality running in equinox, I have decided to move back to karaf 2.2.3 (felix) for an integration test, to test cellar/hazelcast with multiple machines etc., and to figure out how I would deploy. I currently have all these features installed (as well as my own features):

State         Version            Name                  Repository            Description
[installed]   [1.9.3]            hazelcast             repo-0                In memory data grid
[installed]   [2.2.2]            cellar                repo-0                Karaf clustering
[installed]   [2.2.2]            cellar-webconsole     repo-0                Karaf Cellar Webconsole Plugin
[installed]   [3.0]              guice                 repo-0                Google Guice
[installed]   [3.0.6.RELEASE]    spring                karaf-2.2.3
[installed]   [1.2.1]            spring-dm             karaf-2.2.3
[installed]   [2.2.3]            obr                   karaf-2.2.3
[installed]   [2.2.3]            config                karaf-2.2.3
[installed]   [7.4.5.v20110725]  jetty                 karaf-2.2.3
[installed]   [2.2.3]            http                  karaf-2.2.3
[installed]   [2.2.3]            war                   karaf-2.2.3
[installed]   [2.2.3]            webconsole-base       karaf-2.2.3
[installed]   [2.2.3]            webconsole            karaf-2.2.3
[installed]   [2.2.3]            ssh                   karaf-2.2.3
[installed]   [2.2.3]            management            karaf-2.2.3
[installed]   [1.2.0-SNAPSHOT]   shiro-core            shiro-1.2.0-SNAPSHOT
[installed]   [1.2.0-SNAPSHOT]   shiro-web             shiro-1.2.0-SNAPSHOT
[installed]   [5.5.0]            activemq              activemq-5.5.0
[installed]   [5.5.0]            activemq-blueprint    activemq-5.5.0
[installed]   [5.5.0]            activemq-web-console  activemq-5.5.0

When I try to run my wars in Karaf, karaf throws ClassNotFoundException on the following:

org.xml.sax.helpers.DefaultHandler
javax.security.auth.x500.X500Principal

I added the org.xml.sax.helpers and javax.security.auth.x500 packages to the relevant Manifests and Karaf is now happy (I don't need to add any other org.xml.sax.* or javax.security.* packages, even though I use other SAX and java security classes). Thinking this was a Felix issue, I created two vanilla instances of Karaf 2.2.3, one using equinox and one using felix. I created a test bundle which did the following (without adding org.xml.sax.helpers or javax.security.auth.x500 to the manifest):

package com.mytestcompany.importtest;

import javax.security.auth.x500.X500Principal;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.xml.sax.helpers.DefaultHandler;

public class Activator implements BundleActivator {

    @Override
    public void start(BundleContext arg0) throws Exception {
        X500Principal principal = new X500Principal("CN=Fred");
        DefaultHandler defaultHandler = new DefaultHandler();
        System.out.println("Initialized successfully");
    }

    @Override
    public void stop(BundleContext arg0) throws Exception {
        // nothing to clean up in this test bundle
    }
}

Both the equinox and felix versions of Karaf printed "Initialized successfully" without a problem...so there doesn't appear to be an obvious problem with felix itself. Any ideas what may be messed up? I was wondering whether it could be a pax web issue, but I have Pax Web set up in eclipse as well (though I am using pax web 1.1.1/pax url 1.3.4 in equinox vs. pax web 1.0.6/pax url 1.2.8 as part of Karaf 2.2.3). I guess this isn't that important as I can get around it, but if anyone has any ideas what could cause the issue it would be much appreciated. thanks in advance, Gareth
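For reference, the manifest fix described above amounts to adding the two missing packages to Import-Package. A sketch (version ranges omitted; org.osgi.framework shown only because the Activator uses it):

```
Import-Package: org.osgi.framework,
 org.xml.sax.helpers,
 javax.security.auth.x500
```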
Re: Cellar 2.2.2 Issues
JIRA added: https://issues.apache.org/jira/browse/KARAF-859 regards, Gareth

On Fri, Sep 9, 2011 at 3:40 AM, iocanel [via Karaf] wrote:
> Here is how it was supposed to work.
> When you add the repositories, the "add repository event" should be
> broadcasted to the other nodes of the group (as long as feature syncing on
> this group is enabled). The same should happen when you remove the url.
> It seems that it works for adding the repository, but doesn't work for
> removing the repository. This is why when you restart the node, it is
> pulling it back in.
> Could you add a jira for it?
> --
> Ioannis Canellos
> FuseSource
>
> Blog: http://iocanel.blogspot.com
> Apache Karaf Committer & PMC
> Apache ServiceMix Committer
> Apache Gora Committer
>
> If you reply to this email, your message will be added to the discussion
> below:
> http://karaf.922171.n3.nabble.com/Cellar-2-2-2-Issues-tp3320524p3322187.html
> To unsubscribe from Cellar 2.2.2 Issues, click here.
-- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-2-2-2-Issues-tp3320524p3323796.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Adding Additional Cellar Map Configuration?
Hello Ioannis, I checked out the cellar-2.2.x branch:

svn co http://svn.apache.org/repos/asf/karaf/cellar/branches/cellar-2.2.x

and tried to do a build. It appears to depend on 3.0.0-SNAPSHOT poms:

[ERROR] The build could not read 1 project -> [Help 1]
org.apache.maven.project.ProjectBuildingException: Some problems were encountered while processing the POMs:
[FATAL] Non-resolvable parent POM: Could not find artifact org.apache.karaf:cellar:pom:3.0.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 24, column 13
.
.
[ERROR]
[ERROR] The project org.apache.karaf.cellar:org.apache.karaf.cellar.dosgi:3.0.0-SNAPSHOT (/Users/gcollins/software/cellar-2.2.x/dosgi/pom.xml) has 1 error
[ERROR] Non-resolvable parent POM: Could not find artifact org.apache.karaf:cellar:pom:3.0.0-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 24, column 13 -> [Help 2]
org.apache.maven.model.resolution.UnresolvableModelException: Could not find artifact org.apache.karaf:cellar:pom:3.0.0-SNAPSHOT

Did I miss something obvious? thanks in advance, Gareth

On Wed, Sep 7, 2011 at 6:13 PM, iocanel [via Karaf] wrote:
> The branch 2.2.x is now back to stable.
>
> On Wed, Sep 7, 2011 at 10:28 AM, Ioannis Canellos <[hidden email]> wrote:
>>
>> You will need to wait a bit before you try 842. The branch needs some more
>> fixes.
>> >>> >>> -- >>> Ioannis Canellos >>> http://iocanel.blogspot.com >>> Apache Karaf Committer & PMC >>> Apache ServiceMix Committer >>> Apache Gora Committer >>> >>> >>> >>> >>> >> >> >> >> -- >> Ioannis Canellos >> http://iocanel.blogspot.com >> Apache Karaf Committer & PMC >> Apache ServiceMix Committer >> Apache Gora Committer >> >> >> >> >> > > > > -- > Ioannis Canellos > http://iocanel.blogspot.com > Apache Karaf Committer & PMC > Apache ServiceMix Committer > Apache Gora Committer > > > > > > Ioannis Canellos http://iocanel.blogspot.com > > > If you reply to this email, your message will be added to the discussion > below: > http://karaf.922171.n3.nabble.com/Adding-Additional-Cellar-Map-Configuration-tp3305655p3318095.html > To unsubscribe from Adding Additional Cellar Map Configuration?, click here. -- View this message in context: http://karaf.922171.n3.nabble.com/Adding-Additional-Cellar-Map-Configuration-tp3305655p3323570.html Sent from the Karaf - User mailing list archive at Nabble.com.
Cellar 2.2.2 Issues
Hello, I appear to be noticing a couple of issues with cellar. Background - my environment: Red Hat EL5 x64 (2.6.18-164.el5), JDK 1.6.0.27 (two machines - multicast used for hazelcast discovery). Features currently installed:

State         Version            Name                  Repository            Description
[installed]   [1.9.3]            hazelcast             repo-0                In memory data grid
[installed]   [2.2.2]            cellar                repo-0                Karaf clustering
[installed]   [2.2.2]            cellar-webconsole     repo-0                Karaf Cellar Webconsole Plugin
[installed]   [3.0]              guice                 repo-0                Google Guice
[installed]   [3.0.6.RELEASE]    spring                karaf-2.2.3
[installed]   [1.2.1]            spring-dm             karaf-2.2.3
[installed]   [2.2.3]            obr                   karaf-2.2.3
[installed]   [2.2.3]            config                karaf-2.2.3
[installed]   [7.4.5.v20110725]  jetty                 karaf-2.2.3
[installed]   [2.2.3]            http                  karaf-2.2.3
[installed]   [2.2.3]            war                   karaf-2.2.3
[installed]   [2.2.3]            webconsole-base       karaf-2.2.3
[installed]   [2.2.3]            webconsole            karaf-2.2.3
[installed]   [2.2.3]            ssh                   karaf-2.2.3
[installed]   [2.2.3]            management            karaf-2.2.3
[installed]   [1.2.0-SNAPSHOT]   shiro-core            shiro-1.2.0-SNAPSHOT
[installed]   [1.2.0-SNAPSHOT]   shiro-web             shiro-1.2.0-SNAPSHOT
[installed]   [5.5.0]            activemq              activemq-5.5.0
[installed]   [5.5.0]            activemq-blueprint    activemq-5.5.0
[installed]   [5.5.0]            activemq-web-console  activemq-5.5.0

Feature URLs:

Loaded  URI
true    mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.2.2/xml/features
true    mvn:org.jclouds.karaf/feature/1.0.0/xml/features
true    mvn:org.apache.karaf.assemblies.features/enterprise/2.2.3/xml/features
true    mvn:org.apache.karaf.assemblies.features/standard/2.2.3/xml/features
true    https://repository.apache.org/content/groups/snapshots-group/org/apache/shiro/shiro-features/1.2.0-SNAPSHOT/shiro-features-1.2.0-20110813.220457-24-features.xml
true    mvn:org.apache.activemq/activemq-karaf/5.5.0/xml/features

I was playing with feature urls on one of the machines (machine1).
I added and then deleted two feature urls on this machine (I was making sure I understood how cellar works):

http://machine1:8999/myproduct-features.xml
file:///home/osgi/myproduct-hosting/myproduct-features.xml

Whilst leaving karaf on machine2 running, I restarted karaf on machine1. When I run features:listurl after a little while, even though I deleted these two urls previously, the two urls return! If I repeat the process (delete the urls, restart karaf on machine1), the urls return again. I played around a little more, restarting several times. I found that if I run features:listurl immediately after I see the karaf prompt for the first time, I see the url list without the deleted urls (of course, a few seconds later they reappear). I also found that sometimes I can hang karaf completely (requiring kill -9 to stop) when calling features:listurl, though I haven't been able to reproduce this consistently. Anything I could be obviously messing up? Any other information I could provide to help you figure out what may be going wrong (I don't see anything terribly exciting in the log)? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-2-2-2-Issues-tp3320524p3320524.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Adding Additional Cellar Map Configuration?
Hello Ioannis, Making it possible to define the Hazelcast configuration would work. Great! I guess there may still be one classloader issue with this, distinct from issue 842. If you define a map store in this configuration, I suspect hazelcast may look for the map store class before any bundle has imported the hazelcast service:

...
<map-store enabled="true">
    <class-name>com.mytestcompany.examples.DummyStore</class-name>
    <write-delay-seconds>0</write-delay-seconds>
</map-store>
...

Anyway, would you like me to give the 842 change a try? Is there an easy way to get the cellar 2.2.3 nightly (without having to build it myself)? thanks again, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Adding-Additional-Cellar-Map-Configuration-tp3305655p3314719.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Configuring Cellar For TCP Instead Of Multicast
Hello Ioannis, I tried again with an even more basic configuration (base karaf + features:install cellar):

State         Version          Name        Repository   Description
[installed]   [1.9.3]          hazelcast   repo-0       In memory data grid
[installed]   [2.2.2]          cellar      repo-0       Karaf clustering
[installed]   [1.0.0]          jclouds     repo-0       JClouds
[installed]   [3.0]            guice       repo-0       Google Guice
[installed]   [3.0.6.RELEASE]  spring      karaf-2.2.3
[installed]   [1.2.1]          spring-dm   karaf-2.2.3
[installed]   [2.2.3]          config      karaf-2.2.3
[installed]   [2.2.3]          ssh         karaf-2.2.3
[installed]   [2.2.3]          management  karaf-2.2.3

I still had the same problem. In fact, I found the problem is even worse. As soon as you turn multicast off in the file and restart karaf, the cellar features bundle hangs on startup:

[ 67] [Active ] [Created     ] [       ] [ 60] Apache Karaf :: Cellar :: Core (2.2.2)
[ 68] [Active ] [Created     ] [       ] [ 60] Apache Karaf :: Cellar :: Config (2.2.2)
[ 69] [Active ] [GracePeriod ] [       ] [ 60] Apache Karaf :: Cellar :: Features (2.2.2)
[ 70] [Active ] [Created     ] [       ] [ 60] Apache Karaf :: Cellar :: Bundle (2.2.2)
[ 71] [Active ] [Created     ] [       ] [ 60] Apache Karaf :: Cellar :: Utils (2.2.2)
[ 72] [Active ] [Created     ] [       ] [ 60] Apache Karaf :: Cellar :: Shell (2.2.2)
[ 73] [Active ] [            ] [Started] [ 60] Apache Karaf :: Cellar :: Hazelcast (2.2.2)
[ 74] [Active ] [Created     ] [       ] [ 60] Apache Karaf :: Cellar :: Management (2.2.2)

I have to shut karaf down with "kill -9" (as shutdown doesn't work). Even if I now turn multicast back on in the instance config file and restart, the "Cellar :: Features" bundle continues to hang. I don't see anything too exciting in the logs which could point me to the issue. I only see this on shutdown:

2011-09-06 15:29:23,259 | DEBUG | lixDispatchQueue | framework | ? ? | 0 - org.apache.felix.framework - 3.0.9 | BundleEvent STOPPED
2011-09-06 15:29:23,259 | DEBUG | nt Dispatcher: 1 | BlueprintListener | raf.shell.osgi.BlueprintListener 85 | 30 - org.apache.karaf.shell.osgi - 2.2.3 | Blueprint app state changed to Destroying for bundle 74
2011-09-06 15:29:23,259 | WARN | hz.UDP.Sender | ManagementCenterService | dardLoggerFactory$StandardLogger 62 | - - | [cellar] sleep interrupted
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)[:1.6.0_27]
    at com.hazelcast.impl.management.ManagementCenterService$UDPSender.run(ManagementCenterService.java:212)[42:hazelcast:1.9.3]
2011-09-06 15:29:23,260 | DEBUG | FelixStartLevel | management | ? ? | 74 - org.apache.karaf.cellar.management - 2.2.2 | ServiceEvent UNREGISTERING
2011-09-06 15:29:23,261 | DEBUG | FelixStartLevel | ReferenceRecipe | eprint.container.ReferenceRecipe 152 | 9 - org.apache.aries.blueprint - 0.3.1 | Unbinding reference mbeanServer

If you could let me know what else I can try, it would be much appreciated. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Configuring-Cellar-For-TCP-Instead-Of-Multicast-tp3305497p3314614.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Configuring Cellar For TCP Instead Of Multicast
Hello Ioannis, I did try Jean-Baptiste's suggestion. I shut both instances down, added the tcpIpMembers to etc/org.apache.karaf.cellar.instance.cfg, started both instances up, and it still didn't work. When I checked the etc/org.apache.karaf.cellar.instance.cfg configuration files, the tcpIpMembers field was empty:

tcpIpMembers =

Just to make sure I didn't miss something, I repeated the test with quotes around the tcpIpMembers:

tcpIpMembers="192.168.204.123,192.168.204.124"

with the same result. I haven't installed the cellar-cloud feature. I do notice another weird thing, though...which I am not sure is related: I see lots of duplicate fileinstall instances build up (see attached screenshot). I guess another minor point to all this: I will want to set up an apache activemq instance for each Karaf instance, and the activemq instance will need different configuration on each. By default, when I create an activemq instance via activemq:create-broker, a broker blueprint configuration file is dropped in the deploy directory. Cellar automatically tries to replicate anything in the deploy directory, doesn't it? What is the correct way of stopping it from doing this? Anyway, I guess I will try again with as few features installed as possible to see if I can get it to work. Any suggestions on what to try next would be much appreciated. thanks again, Gareth http://karaf.922171.n3.nabble.com/file/n3314332/Screen_shot_2011-09-06_at_1.53.40_PM.png -- View this message in context: http://karaf.922171.n3.nabble.com/Configuring-Cellar-For-TCP-Instead-Of-Multicast-tp3305497p3314332.html Sent from the Karaf - User mailing list archive at Nabble.com.
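For readers following this thread, the shape of the TCP/IP configuration being attempted in etc/org.apache.karaf.cellar.instance.cfg would be roughly the following. Only the tcpIpMembers key actually appears in this thread; the multicastEnabled/tcpIpEnabled key names are assumptions about the Cellar 2.2.x defaults, so verify against the file shipped with your Cellar version:

```
# etc/org.apache.karaf.cellar.instance.cfg (sketch - enable-flag key names assumed)
multicastEnabled = false
tcpIpEnabled = true
tcpIpMembers = 192.168.204.123,192.168.204.124
```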
Adding Additional Cellar Map Configuration?
Hello all, Is there any way to add additional Cellar/Hazelcast configuration? For example, if I reuse the existing Hazelcast service I may want to add persistence to my Maps (this will be another classloader problem as I would need to define my MapStore implementation in the configuration)...or I may want to use a Hazelcast map as a cache with an eviction policy...or it may be useful to define a near cache. Is this possible now? If not, could this feature be added to the roadmap for Cellar? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Adding-Additional-Cellar-Map-Configuration-tp3305655p3305655.html Sent from the Karaf - User mailing list archive at Nabble.com.
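To make the request concrete, here is a hedged sketch of the kind of Hazelcast (1.9.x-era) map configuration being asked for - MapStore persistence, cache-style eviction, and a near cache. The map name and MapStore class are hypothetical, and element names should be checked against the hazelcast.xml schema for the version in use:

```xml
<map name="my.replicated.map">
    <!-- persistence: Hazelcast instantiates this class by name,
         which is exactly where the classloader problem described above bites -->
    <map-store enabled="true">
        <class-name>com.example.MyMapStore</class-name>
        <write-delay-seconds>0</write-delay-seconds>
    </map-store>
    <!-- use the map as a cache with an eviction policy -->
    <eviction-policy>LRU</eviction-policy>
    <max-size>10000</max-size>
    <time-to-live-seconds>300</time-to-live-seconds>
    <!-- keep a small local near cache in front of the distributed map -->
    <near-cache>
        <max-size>1000</max-size>
        <eviction-policy>LRU</eviction-policy>
    </near-cache>
</map>
```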
Configuring Cellar For TCP Instead Of Multicast
kpoint Worker | MessageDatabase | emq.store.kahadb.MessageDatabase 1280 | 81 - org.apache.activemq.activemq-core - 5.5.0 | Checkpoint done.
2011-09-02 18:36:20,371 | INFO | .1.ServiceThread | ClusterManager | dardLoggerFactory$StandardLogger 62 | - - | [cellar]
Members [2] {
    Member [192.168.204.123:5701] this
    Member [192.168.204.124:5701]
}
2011-09-02 18:36:22,264 | DEBUG | heckpoint Worker | MessageDatabase | emq.store.kahadb.MessageDatabase 1161 | 81 - org.apache.activemq.activemq-core - 5.5.0 | Checkpoint started.
2011-09-02 18:36:22,268 | DEBUG | heckpoint Worker | MessageDatabase | emq.store.kahadb.MessageDatabase 1280 | 81 - org.apache.activemq.activemq-core - 5.5.0 | Checkpoint done.
2011-09-02 18:36:22,377 | DEBUG | hz.1.InThread | Connection | dardLoggerFactory$StandardLogger 62 | - - | [cellar] Connection lost /192.168.204.124:34112
2011-09-02 18:36:22,377 | WARN | hz.1.InThread | ReadHandler | dardLoggerFactory$StandardLogger 62 | - - | [cellar] hz.1.InThread Closing socket to endpoint Address[192.168.204.124:5701], Cause:java.io.EOFException
2011-09-02 18:36:22,377 | INFO | .1.ServiceThread | ClusterManager | dardLoggerFactory$StandardLogger 62 | - - | [cellar] Removing Address Address[192.168.204.124:5701]
2011-09-02 18:36:22,382 | INFO | .1.ServiceThread | ClusterManager | dardLoggerFactory$StandardLogger 62 | - - | [cellar]
Members [1] {
    Member [192.168.204.123:5701] this
}
2011-09-02 18:36:22,390 | INFO | hz.1.InThread | InSelector | dardLoggerFactory$StandardLogger 62 | - - | [cellar] 5701 is accepting socket connection from /192.168.204.124:38994
2011-09-02 18:36:22,391 | INFO | hz.1.InThread | InSelector | dardLoggerFactory$StandardLogger 62 | - - | [cellar] 5701 is accepted socket connection from /192.168.204.124:38994

I am sure I have missed something obvious here (though I haven't found it yet in the docs). Anything obvious I have messed up?
thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Configuring-Cellar-For-TCP-Instead-Of-Multicast-tp3305497p3305497.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar And Hazelcast Questions
Hello Ioannis, Thank you very much for all your responses. It has been very helpful. Please bear with me. I must be missing something fundamental here. I realize there are two variables in play here rather than one (i.e. who initializes the Hazelcast instance, whether the TCCL is used). I don't believe Hazelcast can inherit my TCCL if my bundle only uses Hazelcast as a service, can it? When you ran your test, was the Hazelcast instance initialized as part of the same bundle? I did run through a few scenarios to confirm my understanding: (1) Use existing Karaf Hazelcast instance, use TCCL - FAILED (2) Create my own Hazelcast instance, don't use TCCL - FAILED (3) Create my own Hazelcast instance, use TCCL - SUCCESS! So it appears that unless I create my own Hazelcast instance, Hazelcast cannot see my classes (unless I use fragments, of course). Have I missed something obvious here? Would it be an idea to get cellar to use my hazelcast instance? Thus in my bundle which starts hazelcast, I could make sure I import both cellar and my own packages. Thus hazelcast can load both cellar and my classes via the TCCL. Would that make sense? Thank you very much again for all your help. regards, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-And-Hazelcast-Questions-tp3184320p3294046.html Sent from the Karaf - User mailing list archive at Nabble.com.
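The working scenario (3) above boils down to making sure the thread that initializes Hazelcast carries a context classloader that can see the application's classes. A minimal, framework-free sketch of that set-and-restore pattern (the helper name is mine; in the real bundle the Callable body would be the Hazelcast instance creation, e.g. Hazelcast.newHazelcastInstance(config)):

```java
import java.util.concurrent.Callable;

public final class Tccl {

    private Tccl() {
    }

    // Runs the task with the given context classloader installed on the
    // current thread, and restores the previous loader afterwards.
    public static <T> T callWith(ClassLoader loader, Callable<T> task) throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(loader);
        try {
            return task.call();
        } finally {
            current.setContextClassLoader(previous);
        }
    }
}
```

In the bundle that owns the domain classes, the call site would look like `Tccl.callWith(getClass().getClassLoader(), () -> ...)` around the instance creation, so anything the instance later loads by name can resolve that bundle's classes.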
Re: Cellar And Hazelcast Questions
Hello Ioannis, I was wondering. Would using fragments be a reasonable way to solve Hazelcast classloading issues (i.e. any class which may need to be serialized/deserialized by Hazelcast, I should include in a fragment which is attached to the Hazelcast bundle)? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-And-Hazelcast-Questions-tp3184320p3276278.html Sent from the Karaf - User mailing list archive at Nabble.com.
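For what it's worth, the fragment approach being asked about would look roughly like the manifest below: a fragment carrying the serializable classes, attached to the Hazelcast bundle so its classloader can see them. The Fragment-Host value assumes the Hazelcast bundle's symbolic name is "hazelcast" (consistent with the [42:hazelcast:1.9.3] entries visible in logs elsewhere in this archive); the fragment's own symbolic name and package are hypothetical:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.hazelcast-domain-classes
Bundle-Version: 1.0.0
Fragment-Host: hazelcast
```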
Re: Assigning Wars To Ports And Subdirectories?
Hello Achim, The virtual hosts feature is probably exactly what I need. As you suggested, for now I will use Apache as a reverse proxy to hide those services I don't want to make publicly available. thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Assigning-Wars-To-Ports-And-Subdirectories-tp3210710p3219163.html Sent from the Karaf - User mailing list archive at Nabble.com.
A Couple Of Karaf Exceptions From The Web Console
Hello, I am seeing a couple of exceptions in the log when I run the Karaf web console (when I jump to the bundles tab, I believe). Is it a problem that these exceptions are occurring? thanks, Gareth

Exception 1:

01:40:05,901 | WARN | em/console/admin | / | .eclipse.jetty.util.log.Slf4jLog 50 | 46 - org.eclipse.jetty.util - 7.4.2.v20110526 | org.ops4j.pax.web.service.spi.model.ServletModel-5: Failed to instantiate plugin org.apache.felix.webconsole.internal.deppack.DepPackServlet
java.lang.NoClassDefFoundError: org/osgi/service/deploymentadmin/DeploymentException
    at java.lang.Class.getDeclaredConstructors0(Native Method)[:1.6.0_26]
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)[:1.6.0_26]
    at java.lang.Class.getConstructor0(Class.java:2699)[:1.6.0_26]
    at java.lang.Class.newInstance0(Class.java:326)[:1.6.0_26]
    at java.lang.Class.newInstance(Class.java:308)[:1.6.0_26]
    at org.apache.felix.webconsole.internal.servlet.PluginHolder$InternalPlugin.doGetConsolePlugin(PluginHolder.java:761)[66:org.apache.karaf.webconsole.console:2.2.2]
    at org.apache.felix.webconsole.internal.servlet.PluginHolder$Plugin.getConsolePlugin(PluginHolder.java:532)[66:org.apache.karaf.webconsole.console:2.2.2]
    at org.apache.felix.webconsole.internal.servlet.PluginHolder.getLocalizedLabelMap(PluginHolder.java:242)[66:org.apache.karaf.webconsole.console:2.2.2]
    at org.apache.felix.webconsole.internal.servlet.OsgiManager.service(OsgiManager.java:420)[66:org.apache.karaf.webconsole.console:2.2.2]
    at org.apache.felix.webconsole.internal.servlet.OsgiManager.service(OsgiManager.java:384)[66:org.apache.karaf.webconsole.console:2.2.2]
    at org.apache.felix.webconsole.internal.KarafOsgiManager.doService(KarafOsgiManager.java:67)[66:org.apache.karaf.webconsole.console:2.2.2]
    at org.apache.felix.webconsole.internal.KarafOsgiManager$1.run(KarafOsgiManager.java:47)[66:org.apache.karaf.webconsole.console:2.2.2]
    at java.security.AccessController.doPrivileged(Native Method)[:1.6.0_26]
    at javax.security.auth.Subject.doAs(Subject.java:396)[:1.6.0_26]
    at org.apache.felix.webconsole.internal.KarafOsgiManager.service(KarafOsgiManager.java:45)[66:org.apache.karaf.webconsole.console:2.2.2]
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:538)[54:org.eclipse.jetty.servlet:7.4.2.v20110526]
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:478)[54:org.eclipse.jetty.servlet:7.4.2.v20110526]
    at org.ops4j.pax.web.service.jetty.internal.HttpServiceServletHandler.doHandle(HttpServiceServletHandler.java:70)[63:org.ops4j.pax.web.pax-web-jetty:1.0.4]
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)[52:org.eclipse.jetty.server:7.4.2.v20110526]
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:480)[53:org.eclipse.jetty.security:7.4.2.v20110526]
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:225)[52:org.eclipse.jetty.server:7.4.2.v20110526]
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:937)[52:org.eclipse.jetty.server:7.4.2.v20110526]
    at org.ops4j.pax.web.service.jetty.internal.HttpServiceContext.doHandle(HttpServiceContext.java:116)[63:org.ops4j.pax.web.pax-web-jetty:1.0.4]
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406)[54:org.eclipse.jetty.servlet:7.4.2.v20110526]
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)[52:org.eclipse.jetty.server:7.4.2.v20110526]
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:871)[52:org.eclipse.jetty.server:7.4.2.v20110526]
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)[52:org.eclipse.jetty.server:7.4.2.v20110526]
    at org.ops4j.pax.web.service.jetty.internal.JettyServerHandlerCollection.handle(JettyServerHandlerCollection.java:72)[63:org.ops4j.pax.web.pax-web-jetty:1.0.4]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:110)[52:org.eclipse.jetty.server:7.4.2.v20110526]
    at org.eclipse.jetty.server.Server.handle(Server.java:342)[52:org.eclipse.jetty.server:7.4.2.v20110526]
    at org.eclipse.jetty.server.HttpConnection.handleRequest(HttpConnection.java:589)[52:org.eclipse.jetty.server:7.4.2.v20110526]
    at org.eclipse.jetty.server.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:1048)[52:org.eclipse.jetty.server:7.4.2.v20110526]
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:601)[48:org.eclipse.jetty.http:7.4.2.v20110526]
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:214)[48:org.eclipse.jetty.http:7.4.2.v20110526]
    at
Re: Assigning Wars To Ports And Subdirectories?
Hello JB, I didn't realize I could do this for non-wab wars:

> install webbundle:file:./MyWebApp.war?Bundle-SymbolicName=com.mycompany.mywebapp&Web-ContextPath=/hosting/com.mycompany.mywebapp&<custom parameters defined by me>

...so I guess that helps part of my problem. I still hope to find a way to restrict specific web applications to certain ports though... Thanks again. regards, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Assigning-Wars-To-Ports-And-Subdirectories-tp3210710p3211252.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Assigning Wars To Ports And Subdirectories?
Hello JB, I knew about the Web-ContextPath in the MANIFEST.MF. I was just hoping there was a way to do this without making all wars into wabs :). For these wars, could I perhaps create my own bundle listener, make it run before Pax Web (is that possible?), and add the Web-ContextPath into the MANIFEST.MF before Pax Web reads it? I also gave your two suggestions a try: (1) With the first suggestion, I got more ports configured for Jetty, but all my wars were still available on all ports. (2) I added the file install configuration (I now see two file install configurations in the Karaf console). I can now add files from the new location...though web applications are still installed in the same dir. There is no way to run multiple Pax Web instances in the same Karaf instance, is there (with the ability to explicitly assign each war to a specific instance)? Thank you very much for the help. regards, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Assigning-Wars-To-Ports-And-Subdirectories-tp3210710p3211059.html Sent from the Karaf - User mailing list archive at Nabble.com.
Assigning Wars To Ports And Subdirectories?
Hello, I am not sure whether this is a PAX Web question, or a Karaf question but: (1) If I install a war/wab file, is there any way of making it available on certain ports? For example, I would probably want the Karaf console and Active MQ console on a separate port to my application wabs/wars (as my wabs/wars may be visible through a firewall whilst Karaf console should not be). (2) Any way of installing wars (which are not OSGi wabs) to subdirectories? In tomcat I could do this by adding a "#" as part of the war name (e.g. hosting#fred.war could be accessed by http://hostname/hosting/fred.war). There isn't a similar feature in Karaf/PAX Web, is there? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Assigning-Wars-To-Ports-And-Subdirectories-tp3210710p3210710.html Sent from the Karaf - User mailing list archive at Nabble.com.
Using Spring DM For New Software?
Hello, In an earlier question today I mentioned that when I tried to use Spring Security in OSGi, I had to include all the Spring + spring security jars in my wab, which was not ideal. I found out what the problem was: I had to include Spring DM web to be able to remove the jars. The following was added to my web.xml:

<context-param>
    <param-name>contextClass</param-name>
    <param-value>org.springframework.osgi.web.context.support.OsgiBundleXmlWebApplicationContext</param-value>
</context-param>

Since this is new code, I was a little uncomfortable adding a dependency on something which is essentially "EOL", so I had a look to see if I could do the same with Blueprint (as we are already planning to use blueprint for other non-web bundles). Unfortunately the Spring DM web functionality doesn't appear to be covered by the Blueprint spec (and I don't see anything currently in Aries). So my question is - is there another method for including Spring in a wab? If not, and I continue with Karaf, what is the recommended approach going forward? Continue using Spring DM web for the foreseeable future? Avoid creating dependencies on Spring DM web and include the Spring jars in every wab? Any other options in the pipeline (will Aries eventually have a similar feature, or would a replacement come as part of Pax Web?)? Any suggestions/info/guidance would be much appreciated. thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Using-Spring-DM-For-New-Software-tp3205466p3205466.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Mixing Jetty Security and Spring Security In Karaf
Karaf issue added (attached the war and the eclipse project): https://issues.apache.org/jira/browse/KARAF-785 thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Mixing-Jetty-Security-and-Spring-Security-In-Karaf-tp3202093p3204545.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Mixing Jetty Security and Spring Security In Karaf
Is this a test case created using Pax Exam (I haven't played with that yet :))? Would it be OK if I just sent the created war, my source (my Eclipse project), and instructions on what to do to reproduce? Just confirming - I should create an issue BOTH in Pax and in Karaf? thanks in advance, Gareth

On Wed, Jul 27, 2011 at 3:15 AM, Achim Nierbeck [via Karaf] wrote:
> Oh, and could you please provide a testcase so I can test this feature?
> It would really be great if this could also be used as a iTest for pax web
> :-)
>
> thanx, Achim
>
> 2011/7/27 Achim Nierbeck <[hidden email]>:
>> Hi Gareth,
>>
>> this is probably more Pax-Web related I guess I have to see into this.
>> Could you please open an issue on Karaf and Pax Web so I can keep
>> track on this :-)
>>
>> thanx, Achim
>>
>> 2011/7/27 Gareth <[hidden email]>:
>>> Hello,
>>>
>>> I started playing with Spring Security in a wab I installed to Karaf. I got
>>> it working (with some kludges -> I currently need to include all the spring
>>> and spring-security jars in my war for spring to see the spring security
>>> namespace) which is great.
>>>
>>> I did see some weird behavior though. Once I login, and a session is
>>> created, subsequent requests to the same web application are intercepted by
>>> the jetty security (which is used by the karaf console).
My web >>> application >>> still works, but jetty complains my spring security user doesn't exist >>> (as >>> only spring security, not jetty, knows about this user): >>> >>> 21:23:23,606 | WARN | 37-65 - /sst/sst | log >>> | >>> .eclipse.jetty.util.log.Slf4jLog 50 | 46 - org.eclipse.jetty.util - >>> 7.4.2.v20110526 | EXCEPTION >>> javax.security.auth.login.FailedLoginException: User rod does not exist >>> at >>> >>> org.apache.karaf.jaas.modules.properties.PropertiesLoginModule.login(PropertiesLoginModule.java:98) >>> at >>> >>> org.apache.karaf.jaas.boot.ProxyLoginModule.login(ProxyLoginModule.java:83)[karaf-jaas-boot.jar:] >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native >>> Method)[:1.6.0_26] >>> at >>> >>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)[:1.6.0_26] >>> at >>> >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)[:1.6.0_26] >>> at java.lang.reflect.Method.invoke(Method.java:597)[:1.6.0_26] >>> at >>> >>> javax.security.auth.login.LoginContext.invoke(LoginContext.java:769)[:1.6.0_26] >>> at >>> >>> javax.security.auth.login.LoginContext.access$000(LoginContext.java:186)[:1.6.0_26] >>> at >>> javax.security.auth.login.LoginContext$4.run(LoginContext.java:683) >>> at java.security.AccessController.doPrivileged(Native >>> Method)[:1.6.0_26] >>> at >>> >>> javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)[:1.6.0_26] >>> at >>> >>> javax.security.auth.login.LoginContext.login(LoginContext.java:579)[:1.6.0_26] >>> at >>> >>> org.eclipse.jetty.plus.jaas.JAASLoginService.login(JAASLoginService.java:203)[59:org.eclipse.jetty.plus:7.4.2.v20110526] >>> at >>> >>> org.eclipse.jetty.security.authentication.BasicAuthenticator.validateRequest(BasicAuthenticator.java:77)[53:org.eclipse.jetty.security:7.4.2.v20110526] >>> at >>> >>> 
org.eclipse.jetty.security.authentication.DeferredAuthentication.authenticate(DeferredAuthentication.java:100)[53:org.eclipse.jetty.security:7.4.2.v20110526] >>> at >>> >>> org.eclipse.jetty.server.Request.getAuthType(Request.java:353)[52:org.eclipse.jetty.server:7.4.2.v20110526] >>> at >>> >>> javax.servlet.http.HttpServletRequestWrapper.getAuthType(HttpServletRequestWrapper.java:59)[43:org.apache.geronimo.specs.geronimo-servlet_2.5_spec:1.1.2] >>> at >>> >>> javax.servlet.http.HttpServletRequestWrapper.getAuthType(HttpServletRequestWrapper.java:59)[43:org.apache.geronimo.specs.geronimo-servlet_2.5_spec:1.1.2] >>> at >>> >>> com.antennasoftware.sst.SSTServlet.service(SSTServlet.java:36)[688:com.antennasoftware.spring-security-test:1.0.0] >>> at >>> >>> j
Hazelcast Clustering and Web Applications?
Hello, Looking at the Hazelcast documentation, I understand that Hazelcast can support HTTP session clustering. It appears wars can be clustered by running a script which decorates the war with the Hazelcast clustering feature. Would this be a feature you could consider adding to Karaf (i.e. when you deploy a wab, as well as installing it on all Karaf instances in the cluster, you could also have an option to add clustering for the wab itself)? Just throwing an idea for a new feature out there; feel free to say it wouldn't make sense. thanks, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Hazelcast-Clustering-and-Web-Applications-tp3202342p3202342.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar And Hazelcast Questions
hread.java:680)[:1.6.0_26] I guess this makes it difficult to use Hazelcast predicates in OSGi (for now). thanks again, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-And-Hazelcast-Questions-tp3184320p3202047.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar And Hazelcast Questions
Hello Ioannis, Do you need the sample to reproduce this exception (you answered my question about Hazelcast classloading in the Hazelcast forums)?

java.lang.ClassNotFoundException: not found from bundle [org.apache.karaf.cellar.hazelcast]

Give me a little while to create a simplified program. thanks again, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-And-Hazelcast-Questions-tp3184320p3192820.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar And Hazelcast Questions
Thank you again for the response. I was already doing the classloader switch in some threads, as I load JNDI properties files. I added the classloader switch to every other thread which accesses Hazelcast, but I didn't notice a difference. Should all the issues have been fixed by this change? Could I perhaps fix my problem using fragments? If I created a fragment containing the problematic classes, and made the Fragment-Host the Hazelcast bundle, would that be a possible workaround? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-And-Hazelcast-Questions-tp3184320p3187402.html Sent from the Karaf - User mailing list archive at Nabble.com.
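The "classloader switch" mentioned in the message above is the usual save/set/restore of the thread context classloader around calls into Hazelcast. A minimal self-contained sketch (the helper name `withClassLoader` is made up, and the `Supplier` stands in for a real Hazelcast call such as `hazelcastInstance.getMap("myMap2")`):

```java
import java.util.function.Supplier;

public class TcclSwitch {

    // Run a task with the given context classloader, restoring the original
    // afterwards. In real code the task would be a Hazelcast call; here it is
    // a plain Supplier so the sketch stands alone.
    static <T> T withClassLoader(ClassLoader cl, Supplier<T> task) {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(cl);
        try {
            return task.get();
        } finally {
            current.setContextClassLoader(previous);
        }
    }

    public static void main(String[] args) {
        // Use the classloader of a class from our own bundle (MyKeyType in
        // the real code; this class stands in for it here).
        ClassLoader bundleLoader = TcclSwitch.class.getClassLoader();
        String result = withClassLoader(bundleLoader, () -> "looked-up-ok");
        System.out.println(result);
    }
}
```

On the fragment idea from the same message: a fragment whose `Fragment-Host` names the Hazelcast bundle (whatever its symbolic name is in your setup) would attach the problematic classes to Hazelcast's own classloader, which is a commonly used workaround for this kind of deserialization problem.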
Re: Cellar And Hazelcast Questions
Hello JB, Thank you very much for responding so quickly! I am just using the default cluster group (only one machine currently as I am testing on my Mac):

karaf@root> cluster:group-list
   Node                 Group
 * 192.168.190.1:5701   default

I am getting the HazelcastInstance via the service Cellar creates (directly for now as I am still exploring OSGi, probably via Blueprint in the future):

ServiceReference reference = bundleContext.getServiceReference("com.hazelcast.core.HazelcastInstance");
HazelcastInstance instance = (HazelcastInstance) bundleContext.getService(reference);
bundleContext.ungetService(reference);

Then I am looking up/creating two maps (MyKeyType and MyValueType are defined in the same bundle -> the exception thrown below is for MyKeyType):

IMap<String,Long> myMap1 = hazelcastInstance.getMap("myMap1");
IMap<MyKeyType,MyValueType> myMap2 = hazelcastInstance.getMap("myMap2");

Is that enough information? Am I doing anything I shouldn't be doing? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-And-Hazelcast-Questions-tp3184320p3185631.html Sent from the Karaf - User mailing list archive at Nabble.com.
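For the Blueprint route mentioned in the message above, a sketch of what the service lookup could look like declaratively (assuming the HazelcastInstance service that Cellar registers; `com.example.MyService` and its `hazelcastInstance` property are hypothetical placeholder names):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <!-- Look up the HazelcastInstance service registered by Cellar -->
    <reference id="hazelcast" interface="com.hazelcast.core.HazelcastInstance"/>

    <!-- Inject it into our own bean (class/property names are placeholders) -->
    <bean id="myService" class="com.example.MyService">
        <property name="hazelcastInstance" ref="hazelcast"/>
    </bean>

</blueprint>
```

This avoids the manual getServiceReference/getService/ungetService bookkeeping, since the Blueprint container manages the service's lifecycle.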
Cellar And Hazelcast Questions
Hello, I would like to use Karaf as well as Cellar. I would also like to use Hazelcast for my program (since it is now getting a lot of Apache integration vs. other open-source clustering software). So I have some questions:

(1) Is there an issue if I piggyback on the Cellar Hazelcast cluster, or should I create a completely separate one? As a note, I am seeing some exceptions when I uninstall my bundles after using Hazelcast:

java.lang.ClassNotFoundException: not found from bundle [org.apache.karaf.cellar.hazelcast]
    at org.springframework.osgi.util.BundleDelegatingClassLoader.findClass(BundleDelegatingClassLoader.java:103)
    at org.springframework.osgi.util.BundleDelegatingClassLoader.loadClass(BundleDelegatingClassLoader.java:156)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:247)
    .
    .
Caused by: java.lang.ClassNotFoundException: not found by org.apache.karaf.cellar.hazelcast [141]
    at org.apache.felix.framework.ModuleImpl.findClassOrResourceByDelegation(ModuleImpl.java:787)
    at org.apache.felix.framework.ModuleImpl.access$400(ModuleImpl.java:71)
    at org.apache.felix.framework.ModuleImpl$ModuleClassLoader.loadClass(ModuleImpl.java:1768)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    at org.apache.felix.framework.ModuleImpl.getClassByDelegation(ModuleImpl.java:645)

(2) I see there is an OSGi classloader problem with the Hazelcast Cluster Monitor: if you use any custom classes in an IMap, it cannot find them. e.g.:

Caused by: java.lang.ClassNotFoundException: org.apache.karaf.cellar.core.Group not found by com.hazelcast.monitor [143]
    at org.apache.felix.framework.ModuleImpl.findClassOrResourceByDelegation(ModuleImpl.java:787)
    at org.apache.felix.framework.ModuleImpl.access$400(ModuleImpl.java:71)
    at org.apache.felix.framework.ModuleImpl$ModuleClassLoader.loadClass(ModuleImpl.java:1768)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)[:1.6.0_26]

Is there any way around this? Should using the Hazelcast Cluster Monitor be avoided for OSGi/Karaf Cellar? thanks in advance, Gareth -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-And-Hazelcast-Questions-tp3184320p3184320.html Sent from the Karaf - User mailing list archive at Nabble.com.
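One way to sidestep the monitor's classloading problem described above (my own sketch, not something from the Hazelcast docs) is to store custom types pre-serialized as byte[], so that only JDK types ever cross into bundles which cannot see the application classes; the `BytesCodec` class and method names below are made up for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class BytesCodec {

    // Serialize a value to byte[] so that bundles which cannot see our
    // classes (e.g. a monitoring bundle) only ever handle JDK types.
    static byte[] toBytes(Serializable value) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(value);
            oos.close();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Deserialize on our side, where the classes are visible.
    static Object fromBytes(byte[] bytes) {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] raw = toBytes("my-value");
        System.out.println(fromBytes(raw));
    }
}
```

The trade-off is that Hazelcast-side features which need to inspect the real objects (predicates, indexes, the monitor's entry view) only ever see opaque byte arrays.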