I have a use case where I need to continuously ingest data from a Kafka
stream. However, apart from ingestion (into HBase), I also need to compute
some metrics (e.g. the average over the last minute, etc.).

The problem is that it's very likely I'll keep adding more metrics over
time, and I don't want to restart my Spark program each time.

Is there a mechanism by which Spark Streaming can load and plug in code at
runtime without restarting?
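For concreteness, the kind of mechanism I have in mind is roughly the
pattern below (a plain Python sketch of the general idea, not a Spark
Streaming API; the plugin directory, the per-module `compute(values)`
contract, and all names are my own assumptions). Each metric lives in its
own file, and the driver re-scans the plugin directory per batch, so
dropping in a new file adds a metric without a restart:

```python
import importlib.util
import pathlib
import tempfile

def load_metric_plugins(plugin_dir):
    """Load every *.py file in plugin_dir as a metric plugin.

    Each plugin module is assumed to define a compute(values) function
    (this contract is part of the sketch, not any Spark API).
    Returns a dict of {metric_name: compute_function}.
    """
    metrics = {}
    for path in sorted(pathlib.Path(plugin_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        metrics[path.stem] = module.compute
    return metrics

def apply_metrics(metrics, window_values):
    """Apply every loaded metric to one window of values."""
    return {name: fn(window_values) for name, fn in metrics.items()}

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        # Simulate dropping a new metric plugin in at runtime.
        pathlib.Path(d, "avg_last_min.py").write_text(
            "def compute(values):\n"
            "    return sum(values) / len(values)\n"
        )
        # Re-running this per batch picks up newly added files.
        metrics = load_metric_plugins(d)
        print(apply_metrics(metrics, [1.0, 2.0, 3.0, 4.0]))
        # {'avg_last_min': 2.5}
```

In Spark terms the open question is whether something equivalent can run on
the driver (and ship the new closures to executors) inside a long-running
streaming job.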

Any solutions or suggestions?

Thanks,
-- 
Jianshi Huang

LinkedIn: jianshi
Twitter: @jshuang
Github & Blog: http://huangjs.github.com/
