How about splitting them?
For event trace data (job execution timestamps and results), persist into
MySQL or other databases; for metrics, persist into Prometheus.
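
To make it concrete, here is a rough Java sketch of the split. The
`JobTraceStorage` interface, the `job_execution_trace` table, and the metric
names are only placeholders for discussion, not a final API. The event trace
side writes through JDBC; the metrics side pushes counters to a Prometheus
Pushgateway (via the official simpleclient) so Grafana can display them.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import javax.sql.DataSource;

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Counter;
import io.prometheus.client.exporter.PushGateway;

// Placeholder SPI for the event trace side; the real interface is still open for design.
interface JobTraceStorage {

    void record(String jobName, Timestamp executeTime, boolean success) throws Exception;
}

// Event trace: execution timestamp and result go to MySQL (or any JDBC database).
final class JdbcJobTraceStorage implements JobTraceStorage {

    private final DataSource dataSource;

    JdbcJobTraceStorage(final DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void record(final String jobName, final Timestamp executeTime, final boolean success) throws Exception {
        String sql = "INSERT INTO job_execution_trace (job_name, execute_time, success) VALUES (?, ?, ?)";
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, jobName);
            statement.setTimestamp(2, executeTime);
            statement.setBoolean(3, success);
            statement.executeUpdate();
        }
    }
}

// Metrics: aggregate counters are pushed to the Prometheus Pushgateway for Grafana to display.
final class PrometheusJobMetrics {

    private final CollectorRegistry registry = new CollectorRegistry();

    private final Counter executions = Counter.build()
            .name("elasticjob_executions_total")
            .help("Total job executions by result.")
            .labelNames("job_name", "result")
            .register(registry);

    private final PushGateway pushGateway;

    PrometheusJobMetrics(final String pushGatewayAddress) {
        this.pushGateway = new PushGateway(pushGatewayAddress);
    }

    void onJobFinished(final String jobName, final boolean success) throws Exception {
        executions.labels(jobName, success ? "success" : "failure").inc();
        // pushAdd keeps previously pushed groups, so separate jobs do not overwrite each other.
        pushGateway.pushAdd(registry, "elasticjob");
    }
}

The Pushgateway fits here because job executions are short-lived batch
processes that Prometheus cannot scrape directly; Grafana then just queries
Prometheus as usual.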

------------------

Liang Zhang (John)
Apache ShardingSphere & Dubbo


Sheng Wu <[email protected]> wrote on Mon, Jun 15, 2020 at 2:02 PM:

> What event trace data? Prometheus focuses on metrics only. Only
> statistical data is suitable.
>
> Sheng Wu 吴晟
> Twitter, wusheng1108
>
>
> [email protected] <[email protected]> wrote on Mon, Jun 15, 2020 at 1:34 PM:
>
> > Hi Sheng,
> >
> > What `Support prometheus storage` means is putting the event trace data
> > into the Prometheus ecosystem; we can use Pushgateway to collect the data
> > and use Grafana to display the result.
> >
> > ------------------
> >
> > Liang Zhang (John)
> > Apache ShardingSphere & Dubbo
> >
> >
> > Sheng Wu <[email protected]> wrote on Mon, Jun 15, 2020 at 9:26 AM:
> >
> > > Hi Liang
> > >
> > > About SkyWalking, that is another topic. It is fine to do that later.
> > > My question is, what do you mean by `Support prometheus storage`?
> > > Prometheus is basically a monitoring system. Which storage do you mean?
> > >
> > > Sheng Wu 吴晟
> > > Twitter, wusheng1108
> > >
> > >
> > > [email protected] <[email protected]> wrote on Sat, Jun 13, 2020 at 1:28 AM:
> > >
> > > > For persisting the job event trace, we can consider using SkyWalking to
> > > > trace the job events. Is that possible?
> > > >
> > > > ------------------
> > > >
> > > > Liang Zhang (John)
> > > > Apache ShardingSphere & Dubbo
> > > >
> > > >
> > > > Sheng Wu <[email protected]> wrote on Fri, Jun 12, 2020 at 4:54 PM:
> > > >
> > > > > [email protected] <[email protected]> 于2020年6月12日周五
> > 下午4:24写道:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > We are beginning to develop ElasticJob, and want to add some new
> > > > > > features in version 3.0.
> > > > > >
> > > > > > I want to discuss its features.
> > > > > >
> > > > > > Some ideas for new features:
> > > > > >
> > > > > > - Support one-time trigger for ElasticJob-Lite
> > > > > > - Support job dependency based on sharding item
> > > > > > - Use the native ZooKeeper API instead of Curator
> > > > > >
> > > > > > Some ideas for new architecture:
> > > > > >
> > > > > > - Refactor Job API: keep only SimpleJob, and make other job types
> > > > > > introduced via SPI
> > > > > > - Redesign the Job domain: make job the top-level interface, which is
> > > > > > composed of tasks (sharding items)
> > > > > > - Refactor the event trace module
> > > > > >     - Split the event trace module from elasticjob-core
> > > > > >     - Open an SPI for event trace data storage
> > > > > >     - Support prometheus storage
> > > > > >
> > > > >
> > > > > What is Prometheus storage?
> > > > >
> > > > >
> > > > > Sheng Wu 吴晟
> > > > > Twitter, wusheng1108
> > > > >
> > > > >
> > > > > >
> > > > > > ------------------
> > > > > >
> > > > > > Liang Zhang (John)
> > > > > > Apache ShardingSphere & Dubbo
> > > > > >
> > > > >
> > > >
> > >
> >
>
