Hi All,

 

I notice that if we create a Spark context in the driver, we need to call the stop method
to clean it up.

 

    SparkConf sparkConf = new SparkConf().setAppName("FinancialEngineExecutor");
    JavaSparkContext ctx = new JavaSparkContext(sparkConf);
    ...
    String inputPath = props[0].getProperty(Constants.S3_INPUT_FILES);
    JavaRDD<String> lines = ctx.textFile(inputPath);
    EngineFlatMapFunction engine = new EngineFlatMapFunction();
    engine.setAnalysisConfiguraitons(props);

    lines.mapPartitionsToPair(engine);
    ...
    ctx.stop();

And I have the code below in the closure (EngineFlatMapFunction.java):

 

    Configuration hadoopConfiguration =
        new Configuration(new JavaSparkContext(new SparkConf()).hadoopConfiguration());

 

Is there any issue with that? I need the Hadoop configuration inside the closure,
but the Configuration class itself is not serializable, so I retrieve it on the
executor side.
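
For context, here is a minimal sketch of how EngineFlatMapFunction is wired up on my
side (the key/value types and the per-line processing are placeholders, and I'm
assuming the Spark 1.x Java API where PairFlatMapFunction.call returns an Iterable):

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Properties;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.PairFlatMapFunction;

    import scala.Tuple2;

    // Sketch only: key/value types and the record handling are placeholders.
    public class EngineFlatMapFunction
            implements PairFlatMapFunction<Iterator<String>, String, String> {

        private Properties[] props; // serializable, shipped with the closure

        public void setAnalysisConfiguraitons(Properties[] props) {
            this.props = props;
        }

        @Override
        public Iterable<Tuple2<String, String>> call(Iterator<String> lines)
                throws Exception {
            // Rebuild the Hadoop configuration on the executor side, since
            // org.apache.hadoop.conf.Configuration is not serializable.
            // This is the part I'm asking about: it creates a Spark context
            // inside the task and never calls stop() on it.
            Configuration hadoopConfiguration = new Configuration(
                    new JavaSparkContext(new SparkConf()).hadoopConfiguration());

            List<Tuple2<String, String>> results =
                    new ArrayList<Tuple2<String, String>>();
            while (lines.hasNext()) {
                String line = lines.next();
                // ... use hadoopConfiguration and props to process the line ...
                results.add(new Tuple2<String, String>(line, ""));
            }
            return results;
        }
    }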

 

Will there be any issue if I create the Spark context in the above code
without calling stop on it?

 

Regards,

 

Shuai
