Re: Missing error logs when running in Docker

2023-03-14 Thread Daniel Las
Sure, thank you.

On Tue, 14 Mar 2023 at 11:02, Jean-Baptiste Onofré wrote:

> Hi,
>
> OK I will check on 4.2.x (let me complete ActiveMQ release first).
> I will keep you posted.
>
> Regards
> JB
>
> On Mon, Mar 13, 2023 at 11:59 AM Daniel Las 
> wrote:
> >
> > Hi,
> >
> > I didn't try Karaf 4.4.x; I need to solve this issue in Karaf 4.2.x.
> Could you please take a look at 4.2.x? Thanks in advance.
> >
> > Best regards
> >
> > On Mon, 13 Mar 2023 at 09:56, Jean-Baptiste Onofré wrote:
> >>
> >> Did you try with Karaf 4.4.x? I can take a look at 4.2.x but it's
> >> supposed to be "inactive".
> >>
> >> Regards
> >> JB
> >>
> >> On Sun, Mar 12, 2023 at 11:16 AM Daniel Las 
> wrote:
> >> >
> >> > Hi,
> >> >
> >> > We are using karaf run.
> >> >
> >> > Best regards
> >> >
> >> > On Sun, 12 Mar 2023 at 07:47, Jean-Baptiste Onofré wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> What command are you using to run Karaf?
> >> >>
> >> >> In Docker, karaf run is probably the best option as it outputs the
> >> >> log on the console.
> >> >>
> >> >> Regards
> >> >> JB
> >> >>
> >> >> On Sun, Mar 12, 2023 at 6:48 AM Daniel Las 
> wrote:
> >> >> >
> >> >> > Hi,
> >> >> >
> >> >> > We are running a custom Karaf 4.2.11 distribution as a Docker
> container using the console logging appender with the JSON layout. All is
> working fine except that some error logs are missing. For example, bundle
> start failures are not logged in the Docker logs but are visible with the
> log:tail command. What might be the reason?
> >> >> >
> >> >> > Best regards
> >> >> > --
> >> >> > Daniel Łaś
> >> >
> >> >
> >> >
> >> > --
> >> > Daniel Łaś
> >
> >
> >
> > --
> > Daniel Łaś
> >
>


-- 
Daniel Łaś


Re: Missing error logs when running in Docker

2023-03-13 Thread Daniel Las
Hi,

I didn't try Karaf 4.4.x; I need to solve this issue in Karaf 4.2.x. Could
you please take a look at 4.2.x? Thanks in advance.

Best regards

On Mon, 13 Mar 2023 at 09:56, Jean-Baptiste Onofré wrote:

> Did you try with Karaf 4.4.x? I can take a look at 4.2.x but it's
> supposed to be "inactive".
>
> Regards
> JB
>
> On Sun, Mar 12, 2023 at 11:16 AM Daniel Las 
> wrote:
> >
> > Hi,
> >
> > We are using karaf run.
> >
> > Best regards
> >
> > On Sun, 12 Mar 2023 at 07:47, Jean-Baptiste Onofré wrote:
> >>
> >> Hi,
> >>
> >> What command are you using to run Karaf?
> >>
> >> In Docker, karaf run is probably the best option as it outputs the log
> >> on the console.
> >>
> >> Regards
> >> JB
> >>
> >> On Sun, Mar 12, 2023 at 6:48 AM Daniel Las 
> wrote:
> >> >
> >> > Hi,
> >> >
> >> > We are running a custom Karaf 4.2.11 distribution as a Docker
> container using the console logging appender with the JSON layout. All is
> working fine except that some error logs are missing. For example, bundle
> start failures are not logged in the Docker logs but are visible with the
> log:tail command. What might be the reason?
> >> >
> >> > Best regards
> >> > --
> >> > Daniel Łaś
> >
> >
> >
> > --
> > Daniel Łaś
>


-- 
Daniel Łaś


Re: Missing error logs when running in Docker

2023-03-12 Thread Daniel Las
Hi,

We are using *karaf run*.

Best regards

On Sun, 12 Mar 2023 at 07:47, Jean-Baptiste Onofré wrote:

> Hi,
>
> What command are you using to run Karaf?
>
> In Docker, karaf run is probably the best option as it outputs the log
> on the console.
>
> Regards
> JB
>
> On Sun, Mar 12, 2023 at 6:48 AM Daniel Las  wrote:
> >
> > Hi,
> >
> > We are running a custom Karaf 4.2.11 distribution as a Docker container
> using the console logging appender with the JSON layout. All is working
> fine except that some error logs are missing. For example, bundle start
> failures are not logged in the Docker logs but are visible with the
> log:tail command. What might be the reason?
> >
> > Best regards
> > --
> > Daniel Łaś
>


-- 
Daniel Łaś


Missing error logs when running in Docker

2023-03-11 Thread Daniel Las
Hi,

We are running a custom Karaf 4.2.11 distribution as a Docker container
using the console logging appender with the JSON layout. All is working fine
except that some error logs are missing. For example, bundle start failures
are not logged in the Docker logs but are visible with the log:tail command.
What might be the reason?

Best regards
-- 
Daniel Łaś


Re: Decanter issues

2023-01-17 Thread Daniel Las
Good news indeed, thank you. Do you know when the next Decanter release is
planned?

One more question: are there any constraints on supported bean names? I'm
asking because we publish some additional metrics with quite exotic names
and would like to scrape them as well, for example:

*metrics:name=vertxEventbusHandlers_address_/delayer*

We can adjust the bean names, but it would be nice to know what the allowed
name patterns are.
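For context, the IllegalArgumentException in the quoted logs comes from the Prometheus client's naming rule: metric names must match [a-zA-Z_:][a-zA-Z0-9_:]*, so '-' and '/' are rejected. A standalone sketch (this helper is illustrative, not part of Decanter) of what a compliant rename has to produce:

```java
// Illustrative helper showing the Prometheus metric-name rule; not Decanter code.
public class MetricNames {

    // Prometheus metric names must match [a-zA-Z_:][a-zA-Z0-9_:]*
    public static String sanitize(String name) {
        String s = name.replaceAll("[^a-zA-Z0-9_:]", "_");
        if (!s.isEmpty() && Character.isDigit(s.charAt(0))) {
            s = "_" + s; // names must not start with a digit
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sanitize("preferred-read-replica"));
        // -> preferred_read_replica
        System.out.println(sanitize("vertxEventbusHandlers_address_/delayer"));
        // -> vertxEventbusHandlers_address__delayer
    }
}
```

So a bean or attribute name is safe for the Prometheus appender as long as it uses only letters, digits, underscores, and colons, and does not start with a digit.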

Regards

On Tue, 17 Jan 2023 at 14:15, Jean-Baptiste Onofré wrote:

> Hi Daniel,
>
> it looks like a bug with the names containing "-". Let me try to
> reproduce and I will create a Jira.
>
> The good news is that I'm preparing a new Decanter release with a lot
> of updates and fixes. I will include this fix.
>
> Sorry for the inconvenience.
>
> Regards
> JB
>
> On Tue, Jan 17, 2023 at 7:52 AM Daniel Las  wrote:
> >
> > Hi,
> >
> > I'm trying to use Decanter 2.9.0 JMX Collector with Prometheus Appender
> to monitor our applications. I have configured beans to be monitored as:
> >
> > object.name.system=java.lang:*
> >
> > but some metrics are not available. There are some messages logged at
> DEBUG level like:
> >
> > 2023-01-17T06:48:03,607 | DEBUG | Karaf_Worker-10  | BeanHarvester
>   | 72 - org.apache.karaf.decanter.collector.jmx - 2.9.0 |
> Could not read attribute java.lang:type=MemoryPool,name=G1 Survivor Space
> UsageThresholdCount
> >
> > Is this a known issue, or is there a workaround to make such JMX bean
> metrics available?
> >
> > I also tried to monitor Kafka consumer metrics:
> >
> > object.name.kafkaConsumer=kafka.consumer:*
> >
> > but such a configuration causes the following errors to appear in the logs:
> >
> > 2023-01-17T06:50:46,143 | WARN  | EventAdminThread #9 | eventadmin
>  | 2 - org.apache.karaf.services.eventadmin - 4.2.11 |
> EventAdmin: Exception during event dispatch [org.osgi.service.event.Event
> [topic=decanter/collect/jmx/jmx-local/kafka/consumer]
> {hostName=empirica-algo-engine-hft.empirica-crypto, records-lag=0.0,
> felix.fileinstall.filename=file:/opt/algo-engine-hft-4.2.11.4-SNAPSHOT/etc/org.apache.karaf.decanter.collector.jmx-local.cfg,
> type=jmx-local, service.factoryPid=org.apache.karaf.decanter.collector.jmx,
> decanter.collector.name=jmx, records-lead-min=5133554.0,
> records-lead-avg=5135042.1, scheduler.period=60, records-lead=5136728.0,
> scheduler.concurrent=false, component.id=3, karafName=root, host=null,
> scheduler.name=decanter-collector-jmx, object.name.system=java.lang:*,
> timestamp=1673938246142, 
> component.name=org.apache.karaf.decanter.collector.jmx,
> records-lag-avg=0.12, url=local,
> ObjectName=kafka.consumer:type=consumer-fetch-manager-metrics,client-id=consumer-ORDER-TO-TRADE-MEASURES-4,topic=MARKET_DATA_ENRICHED,partition=1,
> service.pid=org.apache.karaf.decanter.collector.jmx.044ca602-2290-4868-a6aa-b77131155312,
> object.name.kafkaConsumer=kafka.consumer:*, records-lag-max=2.0,
> preferred-read-replica=-1, hostAddress=172.22.0.13} |
> [org.osgi.service.event.EventHandler] |
> Bundle(org.apache.karaf.decanter.appender.prometheus [71])]
> > java.lang.IllegalArgumentException: Invalid metric name:
> preferred-read-replica
> > at io.prometheus.client.Collector.checkMetricName(Collector.java:351)
> ~[?:?]
> > at io.prometheus.client.SimpleCollector.(SimpleCollector.java:169)
> ~[?:?]
> > at io.prometheus.client.Gauge.(Gauge.java:69) ~[?:?]
> > at io.prometheus.client.Gauge$Builder.create(Gauge.java:75) ~[?:?]
> > at io.prometheus.client.Gauge$Builder.create(Gauge.java:72) ~[?:?]
> > at
> io.prometheus.client.SimpleCollector$Builder.register(SimpleCollector.java:260)
> ~[?:?]
> > at
> io.prometheus.client.SimpleCollector$Builder.register(SimpleCollector.java:253)
> ~[?:?]
> > at
> org.apache.karaf.decanter.appender.prometheus.PrometheusServlet.handleEvent(PrometheusServlet.java:92)
> ~[?:?]
> > at
> org.apache.felix.eventadmin.impl.handler.EventHandlerProxy.sendEvent(EventHandlerProxy.java:431)
> [!/:?]
> > at
> org.apache.felix.eventadmin.impl.tasks.HandlerTask.run(HandlerTask.java:70)
> [!/:?]
> > at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [?:1.8.0_322]
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_322]
> > at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [?:1.8.0_322]
> > at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [?:1.8.0_322]
> > at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
> >
> > Best regards
> > --
> > Daniel Łaś
>


-- 
Daniel Łaś


Decanter issues

2023-01-16 Thread Daniel Las
Hi,

I'm trying to use Decanter 2.9.0 JMX Collector with Prometheus Appender to
monitor our applications. I have configured beans to be monitored as:

*object.name.system=java.lang:**

but some metrics are not available. There are some messages logged at DEBUG
level like:

2023-01-17T06:48:03,607 | DEBUG | Karaf_Worker-10  | BeanHarvester
   | 72 - org.apache.karaf.decanter.collector.jmx - 2.9.0 | Could
not read attribute java.lang:type=MemoryPool,name=G1 Survivor Space
UsageThresholdCount

Is this a known issue, or is there a workaround to make such JMX bean
metrics available?

I also tried to monitor Kafka consumer metrics:

*object.name.kafkaConsumer=kafka.consumer:**

but such a configuration causes the following errors to appear in the logs:

2023-01-17T06:50:46,143 | WARN  | EventAdminThread #9 | eventadmin
  | 2 - org.apache.karaf.services.eventadmin - 4.2.11 |
EventAdmin: Exception during event dispatch [org.osgi.service.event.Event
[topic=decanter/collect/jmx/jmx-local/kafka/consumer]
{hostName=empirica-algo-engine-hft.empirica-crypto, records-lag=0.0,
felix.fileinstall.filename=file:/opt/algo-engine-hft-4.2.11.4-SNAPSHOT/etc/org.apache.karaf.decanter.collector.jmx-local.cfg,
type=jmx-local, service.factoryPid=org.apache.karaf.decanter.collector.jmx,
decanter.collector.name=jmx, records-lead-min=5133554.0,
records-lead-avg=5135042.1, scheduler.period=60, records-lead=5136728.0,
scheduler.concurrent=false, component.id=3, karafName=root, host=null,
scheduler.name=decanter-collector-jmx, object.name.system=java.lang:*,
timestamp=1673938246142,
component.name=org.apache.karaf.decanter.collector.jmx,
records-lag-avg=0.12, url=local,
ObjectName=kafka.consumer:type=consumer-fetch-manager-metrics,client-id=consumer-ORDER-TO-TRADE-MEASURES-4,topic=MARKET_DATA_ENRICHED,partition=1,
service.pid=org.apache.karaf.decanter.collector.jmx.044ca602-2290-4868-a6aa-b77131155312,
object.name.kafkaConsumer=kafka.consumer:*, records-lag-max=2.0,
preferred-read-replica=-1, hostAddress=172.22.0.13} |
[org.osgi.service.event.EventHandler] |
Bundle(org.apache.karaf.decanter.appender.prometheus [71])]
java.lang.IllegalArgumentException: Invalid metric name:
preferred-read-replica
at io.prometheus.client.Collector.checkMetricName(Collector.java:351) ~[?:?]
at io.prometheus.client.SimpleCollector.(SimpleCollector.java:169)
~[?:?]
at io.prometheus.client.Gauge.(Gauge.java:69) ~[?:?]
at io.prometheus.client.Gauge$Builder.create(Gauge.java:75) ~[?:?]
at io.prometheus.client.Gauge$Builder.create(Gauge.java:72) ~[?:?]
at
io.prometheus.client.SimpleCollector$Builder.register(SimpleCollector.java:260)
~[?:?]
at
io.prometheus.client.SimpleCollector$Builder.register(SimpleCollector.java:253)
~[?:?]
at
org.apache.karaf.decanter.appender.prometheus.PrometheusServlet.handleEvent(PrometheusServlet.java:92)
~[?:?]
at
org.apache.felix.eventadmin.impl.handler.EventHandlerProxy.sendEvent(EventHandlerProxy.java:431)
[!/:?]
at
org.apache.felix.eventadmin.impl.tasks.HandlerTask.run(HandlerTask.java:70)
[!/:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[?:1.8.0_322]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_322]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[?:1.8.0_322]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[?:1.8.0_322]
at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]

Best regards
-- 
Daniel Łaś


Re: Features upgrade - keep single instance of bundle

2022-04-19 Thread Daniel Las
Hi,

Yes, the feature name is the same. We invoke the installation via JMX and
use this method:

org.apache.karaf.features.internal.management.FeaturesServiceMBeanImpl.installFeature(String,
String, boolean)

Regards

On Fri, 15 Apr 2022 at 16:30, Jean-Baptiste Onofré wrote:

> Hi Daniel,
>
> I guess the feature name is the same? Are you using feature:install
> -u (feature:update) or just feature:install?
>
> Regards
> JB
>
> On Thu, Apr 14, 2022 at 10:05 AM Daniel Las 
> wrote:
> >
> > Hi,
> >
> > We use Karaf features to upgrade our bundles. Is it possible to keep
> only one instance of the bundle included in a feature if a major version
> changes? If the version changes from 1.0.0 to 1.1.0 there is one instance
> left, but when the version changes backward-incompatibly, from 1.0.0 to
> 2.0.0, both versions are kept. Can this behavior be changed somehow?
> >
> > Regards
> > --
> > Daniel Łaś
> >
>


-- 
Daniel Łaś


Features upgrade - keep single instance of bundle

2022-04-14 Thread Daniel Las
Hi,

We use Karaf features to upgrade our bundles. Is it possible to keep only
one instance of the bundle included in a feature if a major version
changes? If the version changes from 1.0.0 to 1.1.0 there is one instance
left, but when the version changes backward-incompatibly, from 1.0.0 to
2.0.0, both versions are kept. Can this behavior be changed somehow?
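For context, a sketch of the shell-level flow for such an upgrade (feature name and versions are placeholders):

```
karaf@root()> feature:install -u my-feature/2.0.0    # -u updates the installed feature in place
karaf@root()> feature:uninstall my-feature/1.0.0     # if the old version is still listed
karaf@root()> la | grep my-bundle                    # verify how many bundle instances remain
```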

Regards
-- 
Daniel Łaś


Blueprint reference listener

2022-04-05 Thread Daniel Las
Hi,

Are Blueprint reference listener bind/unbind methods thread-safe? If my
reference listener listens on multiple references of a service, does it need
to handle concurrent invocations?
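To make the concern concrete, here is a defensive sketch (class and method names are made up; the Blueprint XML wiring pointing at bind/unbind is omitted) that stays correct even if the container invokes the callbacks concurrently:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative reference-listener target: bind/unbind would be referenced
// from <reference-listener bind-method="bind" unbind-method="unbind"/>.
public class ServiceTracker {

    // CopyOnWriteArrayList makes add/remove and iteration safe without
    // explicit locking, at the cost of copying on every mutation.
    private final List<Object> services = new CopyOnWriteArrayList<>();

    public void bind(Object service) {
        services.add(service);
    }

    public void unbind(Object service) {
        services.remove(service);
    }

    public int count() {
        return services.size();
    }
}
```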

Regards
-- 
Daniel Łaś


Re: Kafka streams in Karaf

2022-03-28 Thread Daniel Las
Hi JB,

I tried to start my application using the original ServiceMix Kafka Clients
and Kafka Streams bundles.

The problem is in the way the Kafka Clients library handles configuration.
You need to pass a class name, and the Kafka client creates the instance via
*Class.getDeclaredConstructor().newInstance()*. This is what happens with
Kafka Streams: it passes a class name, and Kafka Clients fails because it
can't find the class.

I went through the Kafka libs source code and found that they use the
context class loader if present. The issues are gone now after setting my
bundle class loader as the context class loader.
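The workaround can be sketched as a small helper that swaps the thread context class loader and always restores it (names are illustrative; the KafkaStreams construction goes inside the supplier):

```java
import java.util.function.Supplier;

public final class Tccl {
    private Tccl() {}

    // Run body with the given class loader as the thread context class
    // loader, restoring the previous one afterwards.
    public static <T> T withContextClassLoader(ClassLoader cl, Supplier<T> body) {
        Thread t = Thread.currentThread();
        ClassLoader previous = t.getContextClassLoader();
        t.setContextClassLoader(cl);
        try {
            return body.get();
        } finally {
            t.setContextClassLoader(previous); // always restore
        }
    }
}
```

Usage would look like `Tccl.withContextClassLoader(MyActivator.class.getClassLoader(), () -> new KafkaStreams(topology, props))`, where MyActivator stands for any class loaded by the bundle.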

Regards



On Sat, 26 Mar 2022 at 07:30, Jean-Baptiste Onofré wrote:

> Hi Daniel,
>
> Generally speaking, a solution that always works is to create your
> bundle with private packages/ship all kafka packages (general comment
> in OSGi/Karaf).
>
> If you want to use the import package approach, the ServiceMix bundles
> should work (I didn't try recently, but I tried a while ago).
>
> About your error, I checked and LogAndFailExceptionHandler class is
> present in the kafka-streams bundle.
>
> Do you have the import in your bundle? (As I guess the
> kafka.common.Config is in your bundle classloader, so it looks for the
> LogAndFailExceptionHandler class in the same classloader.)
>
> Regards
> JB
>
> On Sat, Mar 26, 2022 at 7:01 AM Daniel Las  wrote:
> >
> > Hi,
> >
> > Did anybody manage to start a Kafka Streams application in Karaf? I tried
> different approaches:
> >
> > * use ServiceMix kafka-clients and kafka-streams bundles (2.8.1_1)
> > * repackage kafka-streams and make it a fragment for kafka-clients
> > * shade my bundle with kafka-clients and kafka-streams included
> >
> > and every time failed with a class loading issue related to the default
> configuration template built inside kafka-streams, specifically:
> >
> > Caused by: org.apache.kafka.common.config.ConfigException: Invalid value
> org.apache.kafka.streams.errors.LogAndFailExceptionHandler for
> configuration default.deserialization.exception.handler: Class
> org.apache.kafka.streams.errors.LogAndFailExceptionHandler could not be
> found.
> >
> > Best regards
> >
> > --
> > Daniel Łaś
> >
>


-- 
Daniel Łaś


Kafka streams in Karaf

2022-03-26 Thread Daniel Las
Hi,

Did anybody manage to start a Kafka Streams application in Karaf? I tried
different approaches:

* use ServiceMix kafka-clients and kafka-streams bundles (2.8.1_1)
* repackage kafka-streams and make it a fragment for kafka-clients
* shade my bundle with kafka-clients and kafka-streams included

and every time failed with a class loading issue related to the default
configuration template built inside kafka-streams, specifically:

*Caused by: org.apache.kafka.common.config.ConfigException: Invalid value
org.apache.kafka.streams.errors.LogAndFailExceptionHandler for
configuration default.deserialization.exception.handler: Class
org.apache.kafka.streams.errors.LogAndFailExceptionHandler could not be
found.*

Best regards

-- 
Daniel Łaś


Re: Multiple features versions

2022-03-16 Thread Daniel Las
Thank you. We will give it a try.

Regards

On Wed, 16 Mar 2022 at 11:53, Jean-Baptiste Onofré wrote:

> Hi Daniel,
>
> Most of the time, especially with docker/kubernetes/cloud, people create a
> complete full runtime with all features ready to go. Updating one feature
> means updating the runtime.
>
> Personally, I used the update approach quite a lot, and as soon as your
> features are "clean" (no circular dep, etc), it works fine (resolver).
>
> Regards
> JB
>
> On Wed, Mar 16, 2022 at 8:21 AM Daniel Las  wrote:
>
>> Hi,
>>
>> Thank you for the quick response. Yes, it is possible; we already tested
>> it, but I'm curious whether somebody else has tried this approach. I'm
>> worried that this will impact the resolver in some way, such as long
>> upgrade times due to the larger set of features for dependency analysis.
>>
>> Regards
>> On Wed, 16 Mar 2022 at 08:12, Jean-Baptiste Onofré wrote:
>>
>>> Hi Daniel
>>>
>>> Yes that’s possible. You can use feature:install -u (for update) that
>>> upgrade a feature from one version to another.
>>>
>>> Regards
>>> JB
>>>
>>> On Wed, 16 Mar 2022 at 06:41, Daniel Las wrote:
>>>
>>>> Hi,
>>>>
>>>> We are testing provisioning via features. In our case, we are going
>>>> to install new custom feature versions quite frequently by adding new
>>>> feature repositories into a running Karaf instance and upgrading
>>>> selected features. There will be many versions of the same feature
>>>> after some time.
>>>>
>>>> I wonder if this approach is fine or whether it might cause problems.
>>>>
>>>> Regards
>>>> --
>>>> Daniel Łaś
>>>>
>>>
>>
>> --
>> Daniel Łaś
>>
>

-- 
Daniel Łaś


Re: Multiple features versions

2022-03-16 Thread Daniel Las
Hi,

Thank you for the quick response. Yes, it is possible; we already tested it,
but I'm curious whether somebody else has tried this approach. I'm worried
that this will impact the resolver in some way, such as long upgrade times
due to the larger set of features for dependency analysis.

Regards
On Wed, 16 Mar 2022 at 08:12, Jean-Baptiste Onofré wrote:

> Hi Daniel
>
> Yes that’s possible. You can use feature:install -u (for update) that
> upgrade a feature from one version to another.
>
> Regards
> JB
>
> On Wed, 16 Mar 2022 at 06:41, Daniel Las wrote:
>
>> Hi,
>>
>> We are testing provisioning via features. In our case, we are going
>> to install new custom feature versions quite frequently by adding new
>> feature repositories into a running Karaf instance and upgrading selected
>> features. There will be many versions of the same feature after some time.
>>
>> I wonder if this approach is fine or whether it might cause problems.
>>
>> Regards
>> --
>> Daniel Łaś
>>
>

-- 
Daniel Łaś


Multiple features versions

2022-03-15 Thread Daniel Las
Hi,

We are testing provisioning via features. In our case, we are going to
install new custom feature versions quite frequently by adding new feature
repositories into a running Karaf instance and upgrading selected features.
There will be many versions of the same feature after some time.

I wonder if this approach is fine or whether it might cause problems.
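The flow we are testing looks roughly like this in the Karaf shell (repository URLs and versions are placeholders):

```
karaf@root()> feature:repo-add mvn:com.example/my-features/1.1.0/xml/features
karaf@root()> feature:install -u my-feature/1.1.0
karaf@root()> feature:repo-list         # older repository versions accumulate here
karaf@root()> feature:repo-remove mvn:com.example/my-features/1.0.0/xml/features
```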

Regards
-- 
Daniel Łaś


Configuration in Zookeeper

2021-05-24 Thread Daniel Las
Hi,

I'm looking for a centralized configuration management solution for Apache
Karaf 4.2.x.

I have multiple Karaf nodes and want to keep the configuration in a single
place, preferably using Zookeeper as the storage backend. Some config
properties should be shared (the same on every node) and some should be
node-dependent.

Is there any ready-to-use solution, or should I go for a custom one?

Regards
-- 
Daniel Łaś


Re: Decanter and JMX

2021-05-03 Thread Daniel Las
Hi JB,

Thank you, I'll check with another appender.

Regards



On Mon, 3 May 2021 at 06:14, Jean-Baptiste Onofre wrote:

> Hi Daniel,
>
> The JMX collector polls all MBean attributes. However, the Prometheus
> appender only exposes numeric metrics on the Prometheus servlet:
>
> http://localhost:8181/decanter/prometheus
>
> As the generated JMX JSON is "more" than just numeric, it’s possible that
> you don’t have the metrics.
>
> You can check the JMX JSON using another kind of appender (like log
> appender or elasticsearch).
> I can add kind of "json introspection" on the Prometheus appender to
> "force" some JSON fields as metrics (gauge).
>
> Regards
> JB
>
> > On 2 May 2021 at 22:24, Daniel Las wrote:
> >
> > Hi,
> >
> > I installed Decanter 2.7.0 on Karaf 4.2.11 with JMX collector and
> Prometheus appender features. I uncommented
> "object.name.system=java.lang:*" in
> org.apache.karaf.decanter.collector.jmx-local.cfg.
> >
> > Where can I find JVM metrics like current heap memory usage?
> >
> > Regards
> > --
> > Daniel Łaś
> >
>
>

-- 
Daniel Łaś


Decanter and JMX

2021-05-02 Thread Daniel Las
Hi,

I installed Decanter 2.7.0 on Karaf 4.2.11 with JMX collector and
Prometheus appender features. I uncommented
"object.name.system=java.lang:*"
in org.apache.karaf.decanter.collector.jmx-local.cfg.

Where can I find JVM metrics like current heap memory usage?

Regards
-- 
Daniel Łaś


Re: Native library in Karaf

2021-03-12 Thread Daniel Las
Thank you,

I just added the shared library file to the expected location inside the
bundle and it works now.
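For anyone hitting the same error: Netty resolves its native transports from the classpath, typically under META-INF/native/, which is the "expected location" above. The Bundle-NativeCode alternative JB mentioned would look roughly like this (path and attributes are illustrative, not verified against this bundle):

```
# MANIFEST.MF fragment -- illustrative
Bundle-NativeCode: META-INF/native/libnetty_transport_native_epoll_x86_64.so;
 osname=Linux; processor=x86-64
```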

Regards

On Fri, 12 Mar 2021 at 06:08, Jean-Baptiste Onofre wrote:

> Hi Daniel,
>
> If you are using a bundle, you have the Bundle-NativeCode header, which
> allows you to define the location of the dll/so lib.
>
> It’s also possible to set it at the Karaf global level (java.library.path).
>
> Regards
> JB
>
> > On 11 Mar 2021 at 21:28, Daniel Las wrote:
> >
> > Hi,
> >
> > How can I make Karaf see the Netty native library?
> > I'm using the Cassandra Java driver, which complains about the missing
> libnetty_transport_native_epoll_x86_64.so
> >
> > 2021-03-11T21:23:04,069 | WARN  | FelixStartLevel  | NettyUtil
>   | 612 - com.datastax.driver.core - 3.10.2 | Found Netty's
> native epoll transport in the classpath, but epoll is not available. Using
> NIO instead.
> > java.lang.UnsatisfiedLinkError: could not load a native library:
> netty_transport_native_epoll_x86_64
> >
> > Best regards
> > --
> > Daniel Łaś
>
>

-- 
Daniel Łaś


Native library in Karaf

2021-03-11 Thread Daniel Las
Hi,

How can I make Karaf see the Netty native library?
I'm using the Cassandra Java driver, which complains about the
missing libnetty_transport_native_epoll_x86_64.so

2021-03-11T21:23:04,069 | WARN  | FelixStartLevel  | NettyUtil
   | 612 - com.datastax.driver.core - 3.10.2 | Found Netty's native
epoll transport in the classpath, but epoll is not available. Using NIO
instead.
java.lang.UnsatisfiedLinkError: could not load a native library:
netty_transport_native_epoll_x86_64

Best regards
--
Daniel Łaś


Pax JDBC pooling configuration

2021-02-24 Thread Daniel Las
Hi,

I'm using a PostgreSQL datasource and pax-jdbc-pool-dbcp2. I'm struggling
to find good documentation for the datasource pooling configuration. Where
can I find it?
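For readers with the same question, a hedged sketch of a pooled datasource config, assuming pax-jdbc-pool-dbcp2's convention of forwarding "pool."-prefixed properties to commons-dbcp2 (verify the prefix and key names against your pax-jdbc version; all values are illustrative):

```properties
# etc/org.ops4j.datasource-mydb.cfg -- illustrative values
osgi.jdbc.driver.name = PostgreSQL JDBC Driver
dataSourceName = mydb
url = jdbc:postgresql://localhost:5432/mydb
user = myuser
password = mypassword
pool = dbcp2

# commons-dbcp2 settings (forwarded via the pool. prefix):
pool.maxTotal = 16
pool.maxIdle = 8
pool.minIdle = 2
pool.maxWaitMillis = 5000
```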

Best regards
-- 
Daniel Łaś
CTO at Empirica S.A.
+48 695 616181


Karaf in Docker - JVM arguments

2021-02-09 Thread Daniel Las
Hi,

How can I set JVM arguments for a custom distribution running in Docker? I
mean GC or memory settings.
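A hedged sketch of what I would expect to work, assuming the distribution's bin/setenv honors the usual Karaf environment variables (names can differ in a custom distribution — check your bin/setenv; the image name is a placeholder):

```shell
# JAVA_MIN_MEM/JAVA_MAX_MEM and EXTRA_JAVA_OPTS are the conventional
# Karaf knobs read by bin/setenv at startup.
docker run \
  -e JAVA_MIN_MEM=512m \
  -e JAVA_MAX_MEM=2048m \
  -e EXTRA_JAVA_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=100" \
  my-custom-karaf:latest
```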

Regards
-- 
Daniel Łaś
CTO at Empirica S.A.
+48 695 616181


Re: Karaf in docker - JMX access

2021-02-09 Thread Daniel Las
Thanks a lot JB, it works now.
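For the record, the working setup combines the 0.0.0.0 binding below with publishing both RMI ports (the server port is masked as "4" in this archive; substitute your configured rmiServerPort):

```
# etc/org.apache.karaf.management.cfg -- bind the RMI registry and server
# to all interfaces so they are reachable from outside the container
rmiRegistryHost = 0.0.0.0
rmiServerHost = 0.0.0.0

# container start -- publish both the registry and the server port:
#   docker run -p 1099:1099 -p <rmiServerPort>:<rmiServerPort> apache/karaf:4.3.0
```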

Best regards

On Tue, 9 Feb 2021 at 16:27, Jean-Baptiste Onofre wrote:

> Be careful, by default, JMX is bound to localhost (not 0.0.0.0), so not
> visible outside.
>
> I mean by default, in etc/org.apache.karaf.management.cfg, you have:
>
> rmiRegistryHost = 127.0.0.1
> rmiServerHost = 127.0.0.1
>
> Can you try with 0.0.0.0 here instead of localhost ?
>
> Regards
> JB
>
> On 9 Feb 2021 at 16:08, Daniel Las wrote:
>
> Hi,
>
> I started the container using the bare 4.3.0 image pulled from Docker Hub:
>
> docker run -p 1099:1099 -p 4:4 apache/karaf:4.3.0
>
> This is the output of the docker ps command:
>
> e5492ba6143aapache/karaf:4.3.0
> "karaf run"  15 seconds ago  Up 13 seconds
> 8101/tcp, 0.0.0.0:1099->1099/tcp, 0.0.0.0:4->4/tcp, 8181/tcp
> blissful_mahavira
>
> When I try to connect from Visual VM, there are errors logged:
>
> 14:55:56.962 WARN  [RMI TCP Accept-4] RMI TCP Accept-4: accept
> loop for ServerSocket[addr=0.0.0.0/0.0.0.0,localport=4] throws
> java.io.IOException: Only connections from clients running on the host
> where the RMI remote objects have been exported are accepted.
> at
> org.apache.karaf.management.ConnectorServerFactory.checkLocal(ConnectorServerFactory.java:900)
> at
> org.apache.karaf.management.ConnectorServerFactory.access$000(ConnectorServerFactory.java:67)
> at
> org.apache.karaf.management.ConnectorServerFactory$LocalOnlyServerSocket.accept(ConnectorServerFactory.java:646)
> at
> java.rmi/sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(Unknown
> Source)
> at java.rmi/sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(Unknown
> Source)
> at java.base/java.lang.Thread.run(Unknown Source)
>
> I gave JConsole a try; it fails to connect as well. Every time I try
> to connect, the above exception is logged.
>
> Regards
>
On Tue, 9 Feb 2021 at 14:15, Jean-Baptiste Onofre wrote:
>
>> And you set both registry and transport ports?
>>
>> It seems that the 4 is not bound.
>>
>> What’s the service URL you have in etc/org.apache.karaf.management.cfg ?
>>
>> Regards
>> JB
>>
>> On 9 Feb 2021 at 14:01, Daniel Las wrote:
>>
>> Hi,
>>
>> Yes, I did.
>>
>> Regards
>>
>> On Tue, 9 Feb 2021 at 13:59, Jean-Baptiste Onofre wrote:
>>
>>> Hi,
>>>
>>> Did you bind the JMX ports in Docker? (Like docker run -p 1099:1099 -p
>>> 4:4 …)
>>>
>>> Regards
>>> JB
>>>
>>> On 9 Feb 2021 at 13:27, Daniel Las wrote:
>>>
>>> Hi,
>>>
>>> I'm running a custom distribution based on Karaf 4.2.9 in Docker. How
>>> should I configure it to allow JMX access?
>>>
>>> I tried to set the host IP in rmiRegistryHost and rmiServerHost
>>> in org.apache.karaf.management.cfg, but I can't connect to it from a JMX
>>> client. Without it, I see errors in the log when a JMX client attempts to connect:
>>>
>>> 2021-02-09T12:12:17,275 | WARN  | RMI TCP Accept-4 | tcp
>>>  | 3 - org.ops4j.pax.logging.pax-logging-api - 1.11.7 | RMI
>>> TCP Accept-4: accept loop for ServerSocket[addr=
>>> 0.0.0.0/0.0.0.0,localport=4] throws
>>> java.io.IOException: Only connections from clients running on the host
>>> where the RMI remote objects have been exported are accepted.
>>> at
>>> org.apache.karaf.management.ConnectorServerFactory.checkLocal(ConnectorServerFactory.java:900)
>>> ~[?:?]
>>> at
>>> org.apache.karaf.management.ConnectorServerFactory.access$000(ConnectorServerFactory.java:67)
>>> ~[?:?]
>>> at
>>> org.apache.karaf.management.ConnectorServerFactory$LocalOnlyServerSocket.accept(ConnectorServerFactory.java:646)
>>> ~[?:?]
>>> at
>>> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(Unknown
>>> Source) [?:?]
>>> at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(Unknown Source)
>>> [?:?]
>>> at java.lang.Thread.run(Unknown Source) [?:?]
>>>
>>> Best regards
>>> --
>>> Daniel Łaś
>>> CTO at Empirica S.A.
>>> +48 695 616181
>>>
>>>
>>>
>>
>> --
>> Daniel Łaś
>> CTO at Empirica S.A.
>> +48 695 616181
>>
>>
>>
>
> --
> Daniel Łaś
> CTO at Empirica S.A.
> +48 695 616181
>
>
>

-- 
Daniel Łaś
CTO at Empirica S.A.
+48 695 616181


Re: Karaf in docker - JMX access

2021-02-09 Thread Daniel Las
Hi,

I started the container using the bare 4.3.0 image pulled from Docker Hub:

docker run -p 1099:1099 -p 4:4 apache/karaf:4.3.0

This is the output of the docker ps command:

e5492ba6143aapache/karaf:4.3.0
"karaf run"  15 seconds ago  Up 13 seconds
8101/tcp, 0.0.0.0:1099->1099/tcp, 0.0.0.0:4->4/tcp, 8181/tcp
blissful_mahavira

When I try to connect from Visual VM, there are errors logged:

14:55:56.962 WARN  [RMI TCP Accept-4] RMI TCP Accept-4: accept loop
for ServerSocket[addr=0.0.0.0/0.0.0.0,localport=4] throws
java.io.IOException: Only connections from clients running on the host
where the RMI remote objects have been exported are accepted.
at
org.apache.karaf.management.ConnectorServerFactory.checkLocal(ConnectorServerFactory.java:900)
at
org.apache.karaf.management.ConnectorServerFactory.access$000(ConnectorServerFactory.java:67)
at
org.apache.karaf.management.ConnectorServerFactory$LocalOnlyServerSocket.accept(ConnectorServerFactory.java:646)
at
java.rmi/sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(Unknown
Source)
at java.rmi/sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(Unknown
Source)
at java.base/java.lang.Thread.run(Unknown Source)

I gave JConsole a try; it fails to connect as well. Every time I try to
connect, the above exception is logged.

Regards

On Tue, 9 Feb 2021 at 14:15, Jean-Baptiste Onofre wrote:

> And you set both registry and transport ports?
>
> It seems that the 4 is not bound.
>
> What’s the service URL you have in etc/org.apache.karaf.management.cfg ?
>
> Regards
> JB
>
> On 9 Feb 2021 at 14:01, Daniel Las wrote:
>
> Hi,
>
> Yes, I did.
>
> Regards
>
> On Tue, 9 Feb 2021 at 13:59, Jean-Baptiste Onofre wrote:
>
>> Hi,
>>
>> Did you bind the JMX ports in Docker? (Like docker run -p 1099:1099 -p
>> 4:4 …)
>>
>> Regards
>> JB
>>
>> On 9 Feb 2021 at 13:27, Daniel Las wrote:
>>
>> Hi,
>>
>> I'm running a custom distribution based on Karaf 4.2.9 in Docker. How
>> should I configure it to allow JMX access?
>>
>> I tried to set the host IP in rmiRegistryHost and rmiServerHost
>> in org.apache.karaf.management.cfg, but I can't connect to it from a JMX
>> client. Without it, I see errors in the log when a JMX client attempts to connect:
>>
>> 2021-02-09T12:12:17,275 | WARN  | RMI TCP Accept-4 | tcp
>>  | 3 - org.ops4j.pax.logging.pax-logging-api - 1.11.7 | RMI
>> TCP Accept-4: accept loop for ServerSocket[addr=
>> 0.0.0.0/0.0.0.0,localport=4] throws
>> java.io.IOException: Only connections from clients running on the host
>> where the RMI remote objects have been exported are accepted.
>> at
>> org.apache.karaf.management.ConnectorServerFactory.checkLocal(ConnectorServerFactory.java:900)
>> ~[?:?]
>> at
>> org.apache.karaf.management.ConnectorServerFactory.access$000(ConnectorServerFactory.java:67)
>> ~[?:?]
>> at
>> org.apache.karaf.management.ConnectorServerFactory$LocalOnlyServerSocket.accept(ConnectorServerFactory.java:646)
>> ~[?:?]
>> at
>> sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(Unknown
>> Source) [?:?]
>> at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(Unknown Source) [?:?]
>> at java.lang.Thread.run(Unknown Source) [?:?]
>>
>> Best regards
>> --
>> Daniel Łaś
>> CTO at Empirica S.A.
>> +48 695 616181
>>
>>
>>
>
> --
> Daniel Łaś
> CTO at Empirica S.A.
> +48 695 616181
>
>
>

-- 
Daniel Łaś
CTO at Empirica S.A.
+48 695 616181


Re: Karaf in docker - JMX access

2021-02-09 Thread Daniel Las
Hi,

Yes, I did.

Regards

On Tue, Feb 9, 2021 at 13:59, Jean-Baptiste Onofre wrote:

> Hi,
>
> Did you bind the JMX ports on docker ? (Like docker run -p 1099:1099 -p
> 4:4 …)
>
> Regards
> JB
>
> On Feb 9, 2021 at 13:27, Daniel Las wrote:
>
> Hi,
>
> I'm running a custom distribution based on Karaf 4.2.9 in Docker. How
> should I configure it to allow JMX access?
>
> I tried setting the host IP in rmiRegistryHost and rmiServerHost
> in org.apache.karaf.management.cfg, but I can't connect from a JMX
> client. Without those settings, I see errors in the log when a JMX client attempts to connect:
>
> 2021-02-09T12:12:17,275 | WARN  | RMI TCP Accept-4 | tcp
>| 3 - org.ops4j.pax.logging.pax-logging-api - 1.11.7 | RMI
> TCP Accept-4: accept loop for ServerSocket[addr=
> 0.0.0.0/0.0.0.0,localport=4] throws
> java.io.IOException: Only connections from clients running on the host
> where the RMI remote objects have been exported are accepted.
> at
> org.apache.karaf.management.ConnectorServerFactory.checkLocal(ConnectorServerFactory.java:900)
> ~[?:?]
> at
> org.apache.karaf.management.ConnectorServerFactory.access$000(ConnectorServerFactory.java:67)
> ~[?:?]
> at
> org.apache.karaf.management.ConnectorServerFactory$LocalOnlyServerSocket.accept(ConnectorServerFactory.java:646)
> ~[?:?]
> at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(Unknown
> Source) [?:?]
> at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(Unknown Source) [?:?]
> at java.lang.Thread.run(Unknown Source) [?:?]
>
> Best regards
> --
> Daniel Łaś
> CTO at Empirica S.A.
> +48 695 616181
>
>
>

-- 
Daniel Łaś
CTO at Empirica S.A.
+48 695 616181


Karaf in docker - JMX access

2021-02-09 Thread Daniel Las
Hi,

I'm running a custom distribution based on Karaf 4.2.9 in Docker. How
should I configure it to allow JMX access?

I tried setting the host IP in rmiRegistryHost and rmiServerHost
in org.apache.karaf.management.cfg, but I can't connect from a JMX
client. Without those settings, I see errors in the log when a JMX client attempts to connect:

2021-02-09T12:12:17,275 | WARN  | RMI TCP Accept-4 | tcp
   | 3 - org.ops4j.pax.logging.pax-logging-api - 1.11.7 | RMI
TCP Accept-4: accept loop for ServerSocket[addr=
0.0.0.0/0.0.0.0,localport=4] throws
java.io.IOException: Only connections from clients running on the host
where the RMI remote objects have been exported are accepted.
at
org.apache.karaf.management.ConnectorServerFactory.checkLocal(ConnectorServerFactory.java:900)
~[?:?]
at
org.apache.karaf.management.ConnectorServerFactory.access$000(ConnectorServerFactory.java:67)
~[?:?]
at
org.apache.karaf.management.ConnectorServerFactory$LocalOnlyServerSocket.accept(ConnectorServerFactory.java:646)
~[?:?]
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(Unknown
Source) [?:?]
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(Unknown Source) [?:?]
at java.lang.Thread.run(Unknown Source) [?:?]

Best regards
-- 
Daniel Łaś
CTO at Empirica S.A.
+48 695 616181
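[Editor's note] For readers hitting the same problem, a minimal sketch of the configuration knobs involved. The property names match the stock etc/org.apache.karaf.management.cfg; the ports shown are the Karaf defaults and the image name is a made-up example. As the thread shows, on some 4.2.x versions the local-connection check may still reject remote clients even with this in place.

```
# etc/org.apache.karaf.management.cfg (inside the container):
# bind the RMI registry and RMI server to all interfaces
rmiRegistryHost = 0.0.0.0
rmiRegistryPort = 1099
rmiServerHost = 0.0.0.0
rmiServerPort = 44444
serviceUrl = service:jmx:rmi://0.0.0.0:${rmiServerPort}/jndi/rmi://0.0.0.0:${rmiRegistryPort}/karaf-root

# Both ports must then be published when starting the container:
#   docker run -p 1099:1099 -p 44444:44444 my-karaf-image
```

A JMX client would then connect to service:jmx:rmi://<docker-host>:44444/jndi/rmi://<docker-host>:1099/karaf-root (same ports as above).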


Re: [PROPOSAL] Disable autoRefresh on features service by default and simple optional features service

2021-01-08 Thread Daniel Las
Hi,

Don't we already have Karaf 4.3.0 with autoRefresh=true by default, and you
propose to change the default to autoRefresh=false in 4.3.x?

Regards

On Fri, Jan 8, 2021 at 08:38, Jean-Baptiste Onofre wrote:

> Hi,
>
> I guess you didn’t read fully my message ;)
>
> My proposal is:
>
> - introduce the property and keep "true" (as it is today) on Karaf 4.2.x
> - introduce the property and set to "false" (change) on Karaf 4.3.x.
>
> Regards
> JB
>
> On Jan 8, 2021 at 08:16, Daniel Las wrote:
>
> Hi,
>
> It looks like a backward-incompatible change introduced within a
> patch version. I personally would like to keep auto refresh "on" by
> default, as this is the expected/desired behavior for me.
>
> Regards
>
> On Fri, Jan 8, 2021 at 07:31, Jean-Baptiste Onofre wrote:
>
>> Hi everyone,
>>
>> We have received several pieces of user feedback complaining about unexpected
>> and cascading (unrelated) refreshes while installing features.
>>
>> As a reminder, a refresh can happen when:
>> - bundle A imports package foo:1 and a bundle provides newer foo package
>> version. In that case, the features service will refresh A to use the
>> newest package version.
>> - bundle A has an optional import to package foo and a bundle provides
>> this package. In that case, the features service will refresh A to actually
>> use the import as it’s a "resolved" optional.
>> - bundle A is wired to bundle B (from a package perspective or
>> requirement) and B is refreshed. In that case, the features service will
>> refresh A as B is itself refreshed (for the previous reasons for instance).
>> This can cause "cascading" refresh.
>>
>> A refresh means that a bundle can be restarted (if the bundle contains an
>> activator or similar (DS component, blueprint bundle)).
>>
>> In this PR https://github.com/apache/karaf/pull/1287, I propose to
>> introduce a new property autoRefresh in etc/org.apache.karaf.features.cfg
>> to disable auto refresh in the features service (and let the user
>> decide when to trigger a refresh, with the bundle:refresh command for
>> instance).
>> I propose to keep autoRefresh=true on 4.2.x and turn autoRefresh=false on
>> 4.3.x.
>>
>> Thoughts ?
>>
>> On the other hand (and to prepare the "path" to Karaf5), I have created a
>> new "simple features service" (PR will be open soon) that:
>>
>> - just take the features definition in order (ignoring start level)
>> - ignore requirement/capability (no resolver)
>> - no auto refresh
>>
>> Basically, if you have the following feature definition:
>>
>> <feature name="...">
>>   <feature>bar</feature>
>>   <bundle>A</bundle>
>>   <bundle>B</bundle>
>> </feature>
>>
>> The features service will fully install/start the bar feature first, then
>> bundle A, then bundle B.
>> To use this "simple features service", you just have to replace
>> org.apache.karaf.features.core with the org.apache.karaf.features.simple bundle
>> in etc/startup.properties (or in a custom distribution).
>>
>> It’s similar to the Karaf 5 extension behavior (I will share complete
>> details about Karaf 5 and its concepts (module, extension, …) very soon,
>> but that’s another thread ;)).
>>
>> The big advantages of this approach are:
>> - predictable/deterministic provisioning (if it works fine, it works
>> again)
>> - faster deployment (I estimated the gain to about 70%)
>>
>> Thoughts ?
>>
>> If you agree, I will move forward on both tasks.
>>
>> Thanks,
>> Regards
>> JB
>>
>
>
> --
> Daniel Łaś
> CTO at Empirica S.A.
> +48 695 616181
>
>
>

-- 
Daniel Łaś
CTO at Empirica S.A.
+48 695 616181
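[Editor's note] For reference, the property discussed in this thread amounts to a single line in the features service configuration. This is a sketch assuming the PR was merged as proposed; the actual default shipped depends on the Karaf branch.

```
# etc/org.apache.karaf.features.cfg
#
# true  -> features service refreshes affected bundles automatically
#          (historic behavior, proposed to stay the default on 4.2.x)
# false -> bundles are refreshed only when triggered manually,
#          e.g. with the bundle:refresh console command
#          (proposed default on 4.3.x)
autoRefresh = false
```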


Re: [PROPOSAL] Disable autoRefresh on features service by default and simple optional features service

2021-01-07 Thread Daniel Las
Hi,

It looks like a backward-incompatible change introduced within a
patch version. I personally would like to keep auto refresh "on" by
default, as this is the expected/desired behavior for me.

Regards

On Fri, Jan 8, 2021 at 07:31, Jean-Baptiste Onofre wrote:

> Hi everyone,
>
> We have received several pieces of user feedback complaining about unexpected
> and cascading (unrelated) refreshes while installing features.
>
> As a reminder, a refresh can happen when:
> - bundle A imports package foo:1 and a bundle provides newer foo package
> version. In that case, the features service will refresh A to use the
> newest package version.
> - bundle A has an optional import to package foo and a bundle provides
> this package. In that case, the features service will refresh A to actually
> use the import as it’s a "resolved" optional.
> - bundle A is wired to bundle B (from a package perspective or
> requirement) and B is refreshed. In that case, the features service will
> refresh A as B is itself refreshed (for the previous reasons for instance).
> This can cause "cascading" refresh.
>
> A refresh means that a bundle can be restarted (if the bundle contains an
> activator or similar (DS component, blueprint bundle)).
>
> In this PR https://github.com/apache/karaf/pull/1287, I propose to
> introduce a new property autoRefresh in etc/org.apache.karaf.features.cfg
> to disable auto refresh in the features service (and let the user
> decide when to trigger a refresh, with the bundle:refresh command for
> instance).
> I propose to keep autoRefresh=true on 4.2.x and turn autoRefresh=false on
> 4.3.x.
>
> Thoughts ?
>
> On the other hand (and to prepare the "path" to Karaf5), I have created a
> new "simple features service" (PR will be open soon) that:
>
> - just take the features definition in order (ignoring start level)
> - ignore requirement/capability (no resolver)
> - no auto refresh
>
> Basically, if you have the following feature definition:
>
> <feature name="...">
>   <feature>bar</feature>
>   <bundle>A</bundle>
>   <bundle>B</bundle>
> </feature>
>
> The features service will fully install/start the bar feature first, then
> bundle A, then bundle B.
> To use this "simple features service", you just have to replace
> org.apache.karaf.features.core with the org.apache.karaf.features.simple bundle
> in etc/startup.properties (or in a custom distribution).
>
> It’s similar to the Karaf 5 extension behavior (I will share complete
> details about Karaf 5 and its concepts (module, extension, …) very soon,
> but that’s another thread ;)).
>
> The big advantages of this approach are:
> - predictable/deterministic provisioning (if it works fine, it works again)
> - faster deployment (I estimated the gain to about 70%)
>
> Thoughts ?
>
> If you agree, I will move forward on both tasks.
>
> Thanks,
> Regards
> JB
>


-- 
Daniel Łaś
CTO at Empirica S.A.
+48 695 616181


Re: [X-Mas Gift] Panel discussion about Karaf 5

2020-12-17 Thread Daniel Las
Thank you JB, I'm waiting for the schedule proposal then.


Regards

On Thu, Dec 17, 2020 at 09:07, Jean-Baptiste Onofre wrote:

> Hi Daniel,
>
> I don't know yet. My first intention was to record a session and share it
> with you. However, several people want a live chat. So I'm checking
> my work and off-work agenda to find some slots to propose to you.
>
> It might be a first introduction email before Christmas (for people who
> replied to my email) and then a live session (google meet for instance).
>
> Regards
> JB
>
> On Dec 17, 2020 at 07:45, Daniel Las wrote:
>
> Hi,
>
> I'd like to attend too. What's the planned schedule?
>
> Regards
>
> On Tue, Dec 15, 2020 at 18:32, Jean-Baptiste Onofre
> wrote:
>
>> Hi guys,
>>
>> Maybe some of you know that I started to work on Karaf 5.
>>
>> I have something that is almost "usable".
>>
>> Before sending a global discussion thread on the mailing list, I would
>> like to evaluate the ideas & big changes I did.
>>
>> I would like to know if some of you would be interested in a panel
>> discussion call to introduce Karaf 5 (limited audience as a first step).
>>
>> The agenda of this call would be:
>> 1. Pros/Cons about Karaf as it is today
>> 2. Concepts in Karaf 5 (module, extension, …)
>> 3. Building & running
>> 4. Live demo
>>
>> It could be recorded/webinar style (not necessarily a live call) for about 20
>> people as a first step (both Karaf developers and users).
>> The purpose is to evaluate the direction.
>>
>> Thoughts ?
>> Who would be interested ?
>>
>> Thanks,
>> Regards
>> JB
>
>
>
> --
> Daniel Łaś
> CTO at Empirica S.A.
> +48 695 616181
>
>
>

-- 
Daniel Łaś
CTO at Empirica S.A.
+48 695 616181


Re: [X-Mas Gift] Panel discussion about Karaf 5

2020-12-16 Thread Daniel Las
Hi,

I'd like to attend too. What's the planned schedule?

Regards

On Tue, Dec 15, 2020 at 18:32, Jean-Baptiste Onofre wrote:

> Hi guys,
>
> Maybe some of you know that I started to work on Karaf 5.
>
> I have something that is almost "usable".
>
> Before sending a global discussion thread on the mailing list, I would
> like to evaluate the ideas & big changes I did.
>
> I would like to know if some of you would be interested in a panel
> discussion call to introduce Karaf 5 (limited audience as a first step).
>
> The agenda of this call would be:
> 1. Pros/Cons about Karaf as it is today
> 2. Concepts in Karaf 5 (module, extension, …)
> 3. Building & running
> 4. Live demo
>
> It could be recorded/webinar style (not necessarily a live call) for about 20
> people as a first step (both Karaf developers and users).
> The purpose is to evaluate the direction.
>
> Thoughts ?
> Who would be interested ?
>
> Thanks,
> Regards
> JB



-- 
Daniel Łaś
CTO at Empirica S.A.
+48 695 616181


Re: Variables in features.xml

2020-11-09 Thread Daniel Las
Hi,

I made it work by:

* adding a "wrapper.java.additional.[some_number]=-Dvariable=value" line to
etc/karaf-wrapper.conf
* using the ${variable} placeholder in the  block of the features.xml file

This is good enough for me. Thanks again for the help.

Best regards
Daniel Łaś


On Mon, Nov 9, 2020 at 14:48, Jean-Baptiste Onofre wrote:

> Please let me know if you have any issues; I will double-check.
>
> Regards
> JB
>
> On Nov 9, 2020 at 14:41, Daniel Las wrote:
>
> Thank you very much, that was fast :)
>
> Regards
> Daniel Łaś
>
>
> On Mon, Nov 9, 2020 at 14:26, Jean-Baptiste Onofre
> wrote:
>
>> Hi,
>>
>> You can use ${env…} for instance.
>>
>> Regards
>> JB
>>
>> > On Nov 9, 2020 at 14:16, Daniel Las wrote:
>> >
>> > Hi,
>> >
>> > We are considering feature-based deployment in Karaf. Some of our bundles
>> > require paths to be configured. Is it possible to use
>> > variable placeholders in features.xml? For example:
>> >
>> > <feature name="...">
>> >   <config name="...">
>> > location=${location.from.variable}
>> >   </config>
>> > </feature>
>> >
>> > where ${location.from.variable} would vary between installations and
>> > could be provided via an environment variable or some other configuration?
>> >
>> > Best regards
>> > --
>> > Daniel Łaś
>> >
>>
>>
>
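[Editor's note] The wrapper-based workaround Daniel describes above can be sketched as follows. The index number, the system property name, and the config PID are hypothetical examples, not taken from the thread:

```
# etc/karaf-wrapper.conf — pass a system property to the Karaf JVM
# (any unused index N works for wrapper.java.additional.N)
wrapper.java.additional.10=-Ddata.location=/opt/app/data

# features.xml — reference that system property inside a config block:
#   <config name="com.example.myapp">
#     location=${data.location}
#   </config>
```

Since the variable is resolved from a JVM system property, the same features.xml can be reused across installations by changing only the wrapper configuration.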


Re: Variables in features.xml

2020-11-09 Thread Daniel Las
Thank you very much, that was fast :)

Regards
Daniel Łaś


On Mon, Nov 9, 2020 at 14:26, Jean-Baptiste Onofre wrote:

> Hi,
>
> You can use ${env…} for instance.
>
> Regards
> JB
>
> > On Nov 9, 2020 at 14:16, Daniel Las wrote:
> >
> > Hi,
> >
> > We are considering feature-based deployment in Karaf. Some of our bundles
> > require paths to be configured. Is it possible to use
> > variable placeholders in features.xml? For example:
> >
> > <feature name="...">
> >   <config name="...">
> > location=${location.from.variable}
> >   </config>
> > </feature>
> >
> > where ${location.from.variable} would vary between installations and
> > could be provided via an environment variable or some other configuration?
> >
> > Best regards
> > --
> > Daniel Łaś
> >
>
>


Variables in features.xml

2020-11-09 Thread Daniel Las
Hi,

We are considering feature-based deployment in Karaf. Some of our bundles
require paths to be configured. Is it possible to use variable
placeholders in features.xml? For example:

<feature name="...">
  <config name="...">
location=${location.from.variable}
  </config>
</feature>

where ${location.from.variable} would vary between installations and could
be provided via an environment variable or some other configuration?

Best regards
-- 
Daniel Łaś