Subject: Re: Dependency Injection and Microservice development with Spark
Hi,
another nice approach is to use the Reader monad instead, together with a
framework that supports it (e.g. Grafter -
https://github.com/zalando/grafter). It's lightweight and helps a bit with
dependency issues.
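A minimal REPL-pasteable sketch of the underlying pattern (this is not
Grafter's actual API; all names below are made up for illustration):

case class Reader[C, A](run: C => A) {
  def map[B](f: A => B): Reader[C, B] = Reader(c => f(run(c)))
  def flatMap[B](f: A => Reader[C, B]): Reader[C, B] =
    Reader(c => f(run(c)).run(c))
}

case class JobEnv(kafkaServers: String, postgresUrl: String)

val kafka: Reader[JobEnv, String] = Reader(_.kafkaServers)
val postgres: Reader[JobEnv, String] = Reader(_.postgresUrl)

// Dependencies compose without being threaded through by hand;
// the environment is supplied once, at the edge of the program.
val wiring: Reader[JobEnv, (String, String)] =
  for { k <- kafka; p <- postgres } yield (k, p)

wiring.run(JobEnv("broker:9092", "jdbc:postgresql://db/app"))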
2016-12-28 22:55 GMT+01:00 Lars Albertsson :
> Do you really need dependency injection?
Lars,
Thank you. I want to use DI for configuring all the properties (wiring) for
the architectural approach below:
Oracle -> Kafka batch (event queuing) -> Spark jobs (incremental load from
HBase -> Hive with transformation) -> Spark transformation -> PostgreSQL
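For illustration, the kind of wiring I would want the DI framework to
supply (property names below are made up):

case class PipelineConfig(
  kafkaBootstrapServers: String, // Oracle -> Kafka batch stage
  hbaseZookeeperQuorum: String,  // incremental load from HBase
  hiveDatabase: String,          // Hive target of the transformation
  postgresJdbcUrl: String        // final sink
)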
Thanks.
On Thu, Dec 29, 2016 at
Adding to Lars Albertsson & Miguel Morales: I am hoping to see how well
scalameta branches into support for macros that could do away with sizable
DI problems, with the remainder handled by passing a class type as an
argument, as Miguel Morales mentioned.
Thanks,
On Wed, Dec 28, 2016 at 6:41 PM, Miguel Morales
Hi
Not sure about Spring Boot, but trying to use DI libraries you'll run into
serialization issues. I've had luck using an old version of Scaldi.
Recently, though, I've been passing the class types as arguments with
default values. Then in the Spark code it gets instantiated, so the
instance itself is never serialized.
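A minimal REPL-pasteable sketch of that pattern (all names are
hypothetical, and the Spark closure is elided):

trait MetricsSink { def write(line: String): Unit }
class ConsoleSink extends MetricsSink { def write(line: String): Unit = println(line) }

// The job takes a Class with a default value instead of an instance,
// so only the class reference crosses the driver/executor boundary.
def runJob(records: Seq[String],
           sinkClass: Class[_ <: MetricsSink] = classOf[ConsoleSink]): Unit = {
  // In real Spark code this would sit inside e.g. foreachPartition.
  val sink = sinkClass.getDeclaredConstructor().newInstance()
  records.foreach(sink.write)
}

runJob(Seq("a", "b"))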
Do you really need dependency injection?
DI is often used for testing purposes. Data processing jobs, however, are
easy to test without DI, thanks to their functional and synchronous
nature, whether they are batch or streaming jobs.
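For example (a REPL-pasteable sketch with made-up names), a transformation
kept as a pure function needs no injection framework to test:

object Dedupe {
  // The same logic can be mapped over an RDD or Dataset, but the
  // test needs neither a SparkContext nor any injected dependencies.
  def latestPerKey(rows: Seq[(String, Int)]): Map[String, Int] =
    rows.groupBy(_._1).map { case (k, vs) => k -> vs.map(_._2).max }
}

assert(Dedupe.latestPerKey(Seq("a" -> 1, "a" -> 3, "b" -> 2)) == Map("a" -> 3, "b" -> 2))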
Hello Community,
The current approach I am using for Spark job development is Scala + SBT
and an uber jar, with a yml properties file to pass configuration
parameters. But if I would like to use dependency injection and
microservice development like Spring Boot's features in Scala, then what
would be the recommended approach?
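For reference, a rough sketch of the yml loading I use today (assuming
SnakeYAML on the classpath; the file name and key are illustrative):

import java.io.FileInputStream
import org.yaml.snakeyaml.Yaml
import scala.collection.JavaConverters._

object Config {
  // Reads the properties file shipped alongside the uber jar.
  def load(path: String): Map[String, AnyRef] =
    new Yaml()
      .load(new FileInputStream(path))
      .asInstanceOf[java.util.Map[String, AnyRef]]
      .asScala
      .toMap
}

// val conf = Config.load("application.yml")
// val brokers = conf("kafka.bootstrap.servers").toString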