When processing data, I end up with an RDD[Iterable[MyCaseClass]] that I want to
convert to RDD[MyCaseClass] so it can then be turned into a Dataset or DataFrame
with toDS(). But I run into a problem: a SparkContext cannot be instantiated
inside a map function because one already exists, even with
spark.driver.allowMultipleContexts set to true.

    val conf = new SparkConf()
    conf.set("spark.driver.allowMultipleContexts", "true")
    new SparkContext(conf).parallelize(seq)
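To illustrate, here is the flattening I am after, sketched with plain Scala collections (no Spark dependency; `MyCaseClass` is just a stand-in). Since RDD exposes the same flatMap signature, I would expect `rdd.flatMap(identity)` to do the equivalent as an ordinary transformation, without creating any SparkContext:

```scala
// Stand-in for the real case class in my job
case class MyCaseClass(id: Int)

object FlattenSketch {
  def main(args: Array[String]): Unit = {
    // Analogous to the contents of an RDD[Iterable[MyCaseClass]]
    val nested: Seq[Iterable[MyCaseClass]] =
      Seq(Seq(MyCaseClass(1), MyCaseClass(2)), Seq(MyCaseClass(3)))

    // flatMap(identity) removes one level of nesting, giving the
    // flat sequence of elements
    val flat: Seq[MyCaseClass] = nested.flatMap(identity)

    println(flat) // List(MyCaseClass(1), MyCaseClass(2), MyCaseClass(3))
  }
}
```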

How can I fix this?

Thanks.
