Nothing about Spark depends on a cluster. The Hadoop client libs are
required because they are part of the API, but there is no need to remove
them if you aren't using YARN. Indeed, you can't remove them, but they're
just libs.
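For illustration, a minimal sketch (the object name is made up): even with a
purely local master, Spark hands you a Hadoop Configuration, and even local
file access is served through Hadoop's FileSystem abstraction, which is why
those jars have to stay on the classpath.

    import org.apache.hadoop.fs.FileSystem
    import org.apache.spark.sql.SparkSession

    object HadoopLibsDemo {
      def main(args: Array[String]): Unit = {
        // No cluster anywhere: everything runs in-process.
        val spark = SparkSession.builder()
          .appName("hadoop-libs-demo")
          .master("local[*]")
          .getOrCreate()
        // Spark still exposes a Hadoop Configuration, and local I/O
        // goes through Hadoop's FileSystem API.
        val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
        println(fs.getUri) // file:/// - the local filesystem via the Hadoop API
        spark.stop()
      }
    }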
On Sun, Nov 12, 2017, 9:36 PM wrote:
Within a CI/CD pipeline I use MiniDFSCluster and MiniYarnCluster if the
production cluster also has HDFS and YARN - this has proven extremely
useful and has caught a lot of errors before going to the cluster (i.e. it
saves a lot of money).
Cf. https://wiki.apache.org/hadoop/HowToDevelopUnitTest
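For reference, a minimal sketch of the MiniDFSCluster part (assuming the
hadoop-hdfs test-jar and hadoop-common are on the test classpath; the object
name and paths are made up for illustration):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.hdfs.MiniDFSCluster

    object MiniDfsSketch {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()
        // Keep the mini-cluster's block data under the build directory.
        conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, "target/minidfs")
        val cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build()
        try {
          val fs = cluster.getFileSystem // a real in-process HDFS
          fs.mkdirs(new Path("/data"))
          println(s"mini HDFS at ${cluster.getURI}") // hdfs://127.0.0.1:<port>
        } finally {
          cluster.shutdown() // always tear the cluster down after the test
        }
      }
    }

MiniYarnCluster can be wired up the same way from the YARN server test-jar.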
@Jörn Spark without Hadoop is useful:
- For using Spark's programming model on a single beefy instance
- For testing and integrating with a CI/CD pipeline.
It's ugly to have tests which depend on a cluster running somewhere; a
local-mode test sketch follows below.
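As a sketch of that (assuming ScalaTest on the test classpath; the suite and
test names are made up), the whole job runs in-process with a local master,
so CI needs no cluster at all:

    import org.apache.spark.sql.SparkSession
    import org.scalatest.FunSuite

    class WordCountSuite extends FunSuite {
      test("word count needs no running cluster") {
        // local[2]: two in-process worker threads, no cluster manager at all.
        val spark = SparkSession.builder()
          .appName("word-count-test")
          .master("local[2]")
          .getOrCreate()
        try {
          val counts = spark.sparkContext
            .parallelize(Seq("a b", "b"))
            .flatMap(_.split(" "))
            .countByValue()
          assert(counts == Map("a" -> 1L, "b" -> 2L))
        } finally spark.stop()
      }
    }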
On Sun, 12 Nov 2017 at 17:17 Jörn Franke wrote:
Why do you even mind?
> On 11. Nov 2017, at 18:42, Cristian Lorenzetto wrote:
Hey Cristian,
You don’t need to remove anything. Spark has a standalone mode. Actually,
that's the default. https://spark.apache.org/docs/latest/spark-standalone.html
When building Spark (and you should build it yourself), just use the option
that suits you: https://spark.apache.org/docs/latest/
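For example (assuming a master already started with sbin/start-master.sh on
the default port; the host and object name are illustrative), an application
just points its master URL at it - no Hadoop services are involved:

    import org.apache.spark.sql.SparkSession

    object StandaloneSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("standalone-demo")
          // Standalone cluster manager; YARN never enters the picture.
          .master("spark://localhost:7077")
          .getOrCreate()
        println(spark.range(1000).count()) // runs on the standalone executors
        spark.stop()
      }
    }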
Considering the case where I don't need HDFS, is there a way to remove
Hadoop completely from Spark?
Is YARN the only dependency in Spark?
Is there no Java or Scala (JDK languages) YARN-like lib to embed in a project
instead of calling external servers?
Is the YARN lib difficult to customize?
I made different ques