Thanks JB for shedding some light on this.

I was just wondering; I didn't mean to start a discussion/troll either :-).
Anyway, it's a very good idea. It can be a very good alternative, and it can
improve both projects!

I say +1 but I'm not sure if my vote will be taken into account :-).

Cheers

2014-10-14 20:59 GMT+02:00 Jamie G. <jamie.goody...@gmail.com>:

> Thank you JB for the description, sounds very interesting.
>
> +1 as a subproject idea, nice name choice too :)
>
> Cheers,
> Jamie
>
> On Tue, Oct 14, 2014 at 3:54 PM, Achim Nierbeck <bcanh...@googlemail.com>
> wrote:
> > Hi JB,
> >
> > This is a very nice and detailed description.
> > I like it right away, so +1
> > for calling it Decanter and making it an extra subproject.
> >
> > Regards, Achim
> >
> > sent from mobile device
> >
> > On 14.10.2014 17:13, "Jean-Baptiste Onofré" <j...@nanthrax.net> wrote:
> >
> >> Hi all,
> >>
> >> First of all, sorry for this long e-mail ;)
> >>
> >> Some weeks ago, I blogged about using ELK
> >> (Logstash/Elasticsearch/Kibana) with Karaf, Camel, ActiveMQ, etc. to
> >> provide a monitoring dashboard (knowing what's happening in Karaf and
> >> being able to store it for a long period):
> >>
> >> http://blog.nanthrax.net/2014/03/apache-karaf-cellar-camel-activemq-monitoring-with-elk-elasticsearch-logstash-and-kibana/
> >>
> >> While this solution works fine, there are some drawbacks:
> >> - it requires additional middleware on the machines. In addition to
> >> Karaf itself, we have to install Logstash, Elasticsearch nodes, and the
> >> Kibana console
> >> - it's not usable "out of the box": you at least need to configure
> >> Logstash (with the different input/output plugins) and Kibana (to create
> >> the dashboards that you need)
> >> - it doesn't cover all the monitoring needs, especially in terms of SLA:
> >> we want to be able to raise alerts depending on some events (for
> >> instance, when a regex matches in the log messages, when a feature is
> >> uninstalled, when a JMX metric is greater than a given value, etc.)
> >>
> >> Actually, Karaf (and related projects) already provides most (if not
> >> all) of the data required for monitoring. However, it would be very
> >> helpful to have some "glue", ready to use and more user friendly,
> >> including storage of the metrics/monitoring data.
> >>
> >> With this in mind, I started a prototype of a monitoring solution for
> >> Karaf and the applications running in Karaf.
> >> The goal is to be very extensible, flexible, and easy to install and use.
> >>
> >> In terms of architecture, we can find the following components:
> >>
> >> 1/ Collectors & SLA Policies
> >> The collectors are services responsible for harvesting monitoring data.
> >> We have two kinds of collectors:
> >> - the polling collectors are invoked periodically by a scheduler.
> >> - the event-driven collectors react to some events.
> >> Two collectors are already available:
> >> - the JMX collector is a polling collector which harvests all MBean
> >> attributes
> >> - the Log collector is an event-driven collector, implementing a
> >> PaxAppender which reacts when a log message occurs
> >> We plan the following collectors:
> >> - a Camel Tracer collector would be an event-driven collector, acting as
> >> a Camel Interceptor. It would allow tracing any Exchange in Camel.
> >>
> >> It's very dynamic (thanks to OSGi services), so it's possible to add a
> >> new custom collector (user/custom implementation).
> >>
> >> The collectors are also responsible for checking the SLA. As the SLA
> >> policies are tied to the collected data, it makes sense that the
> >> collector validates the SLA and delegates the alert to the SLA services.
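> >>
> >> To illustrate (hypothetical names, not the actual prototype code), a
> >> polling collector contract and a JMX-style collector could look roughly
> >> like this:
> >>
> >> // Hypothetical contract: a polling collector returns the harvested
> >> // data as key/value pairs for the dispatcher.
> >> public interface PollingCollector {
> >>     java.util.Map<String, Object> collect() throws Exception;
> >> }
> >>
> >> // Sketch of a JMX-style polling collector harvesting all readable
> >> // MBean attributes from the platform MBeanServer.
> >> public class JmxCollector implements PollingCollector {
> >>     private final javax.management.MBeanServer server =
> >>         java.lang.management.ManagementFactory.getPlatformMBeanServer();
> >>
> >>     public java.util.Map<String, Object> collect() throws Exception {
> >>         java.util.Map<String, Object> data = new java.util.HashMap<>();
> >>         for (javax.management.ObjectName name : server.queryNames(null, null)) {
> >>             for (javax.management.MBeanAttributeInfo attr :
> >>                     server.getMBeanInfo(name).getAttributes()) {
> >>                 if (!attr.isReadable()) {
> >>                     continue;
> >>                 }
> >>                 try {
> >>                     data.put(name + "." + attr.getName(),
> >>                              server.getAttribute(name, attr.getName()));
> >>                 } catch (Exception e) {
> >>                     // some attributes cannot be read at runtime, skip them
> >>                 }
> >>             }
> >>         }
> >>         return data;
> >>     }
> >> }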
> >>
> >> 2/ Scheduler
> >> The scheduler service is responsible for calling the polling collectors,
> >> gathering the harvested data, and delegating it to the dispatcher.
> >> We already have a simple scheduler (just a thread), but we can plan a
> >> Quartz scheduler (for advanced cron/trigger configuration), and another
> >> one leveraging the Karaf scheduler.
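> >>
> >> Roughly, the simple thread-based scheduler could be sketched like this
> >> (again hypothetical names, reusing the PollingCollector contract above
> >> and the Dispatcher sketched under 3/ below):
> >>
> >> // Sketch of the simple scheduler: it periodically invokes the polling
> >> // collectors and hands the harvested data to the dispatcher.
> >> public class SimpleScheduler implements Runnable {
> >>     private final java.util.List<PollingCollector> collectors;
> >>     private final Dispatcher dispatcher;
> >>     private final long periodMillis;
> >>     private volatile boolean running = true;
> >>
> >>     public SimpleScheduler(java.util.List<PollingCollector> collectors,
> >>                            Dispatcher dispatcher, long periodMillis) {
> >>         this.collectors = collectors;
> >>         this.dispatcher = dispatcher;
> >>         this.periodMillis = periodMillis;
> >>     }
> >>
> >>     public void run() {
> >>         while (running) {
> >>             for (PollingCollector collector : collectors) {
> >>                 try {
> >>                     dispatcher.dispatch(collector.collect());
> >>                 } catch (Exception e) {
> >>                     // a failing collector should not stop the scheduler
> >>                 }
> >>             }
> >>             try {
> >>                 Thread.sleep(periodMillis);
> >>             } catch (InterruptedException e) {
> >>                 Thread.currentThread().interrupt();
> >>                 running = false;
> >>             }
> >>         }
> >>     }
> >>
> >>     public void stop() {
> >>         running = false;
> >>     }
> >> }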
> >>
> >> 3/ Dispatcher
> >> The dispatcher is called by the scheduler or the event-driven collectors
> >> to dispatch the collected data to the appenders.
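> >>
> >> As a sketch (hypothetical names), the dispatcher and the appender
> >> contract it relies on could look like:
> >>
> >> // Hypothetical appender contract: an appender receives collected data
> >> // and sends/stores it in a target system.
> >> public interface Appender {
> >>     void append(java.util.Map<String, Object> data) throws Exception;
> >> }
> >>
> >> // Sketch of the dispatcher: it forwards the collected data to every
> >> // registered appender. In the prototype the appenders would be looked
> >> // up as OSGi services; here they are registered by hand to keep the
> >> // sketch short.
> >> public class Dispatcher {
> >>     private final java.util.List<Appender> appenders =
> >>         new java.util.concurrent.CopyOnWriteArrayList<>();
> >>
> >>     public void addAppender(Appender appender) {
> >>         appenders.add(appender);
> >>     }
> >>
> >>     public void dispatch(java.util.Map<String, Object> data) {
> >>         for (Appender appender : appenders) {
> >>             try {
> >>                 appender.append(data);
> >>             } catch (Exception e) {
> >>                 // one failing appender should not block the others
> >>             }
> >>         }
> >>     }
> >> }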
> >>
> >> 4/ Appenders
> >> The appender services are responsible for sending/storing the collected
> >> data to target systems.
> >> For now, we have two appenders:
> >> - a log appender which just logs the collected data
> >> - an elasticsearch appender which sends the collected data to an
> >> Elasticsearch instance. For now, it uses an "external" Elasticsearch,
> >> but I'm working on an elasticsearch feature allowing Elasticsearch to be
> >> embedded in Karaf (it's mostly done).
> >> We can plan the following other appenders:
> >> - redis to send the collected data to the Redis messaging system
> >> - jdbc to store the collected data in a database
> >> - jms to send the collected data to a JMS broker (like ActiveMQ)
> >> - camel to send the collected data to a Camel direct-vm/vm endpoint of a
> >> route (it would create an internal route)
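> >>
> >> For example, a log appender built on the hypothetical Appender contract
> >> sketched above could be as simple as:
> >>
> >> // Sketch of the log appender: it just logs every collected entry.
> >> public class LogAppender implements Appender {
> >>     private final org.slf4j.Logger logger =
> >>         org.slf4j.LoggerFactory.getLogger(LogAppender.class);
> >>
> >>     public void append(java.util.Map<String, Object> data) {
> >>         for (java.util.Map.Entry<String, Object> entry : data.entrySet()) {
> >>             logger.info("{} = {}", entry.getKey(), entry.getValue());
> >>         }
> >>     }
> >> }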
> >>
> >> 5/ Console/Kibana
> >> The console is composed of two parts:
> >> - an AngularJS or Bootstrap layer allowing configuration of the SLAs and
> >> global settings
> >> - an embedded Kibana instance with pre-configured dashboards (when the
> >> elasticsearch appender is used). We will have a set of already created
> >> Lucene queries and a kind of "Karaf/Camel/ActiveMQ/CXF" dashboard
> >> template.
> >> The Kibana instance will be embedded in Karaf (not external).
> >>
> >> Of course, we have ready-to-use features, allowing you to very easily
> >> install the modules that you want.
> >>
> >> I named the prototype Karaf Decanter. I don't have a preference about
> >> the name or the location of the code (it could be a Karaf subproject
> >> like Cellar or Cave, or live directly in the Karaf codebase).
> >>
> >> Thoughts?
> >>
> >> Regards
> >> JB
> >> --
> >> Jean-Baptiste Onofré
> >> jbono...@apache.org
> >> http://blog.nanthrax.net
> >> Talend - http://www.talend.com
>
