Hi, I think Java's native syntax does not support that; you have to do it by declaring a variable the Java way, for example:

int parameter = 100;
spark.sql("select * from table where parameter = " + parameter);
-- --
??: "Mann Du";
:
Shadowing it like this:

object MyObject {
  // the forwarding method lives on an object, so the closure
  // captures MyObject instead of the enclosing class
  def mymethod(param: MyParam) = actual_function(param)
}

class MyObject {
  import MyObject._
  session.map { ... =>
    mymethod(...)
  }
}
does the job.
Thanks for the advice!
‐‐‐ Original Message ‐‐‐
On Friday, November 30, 2018 9:26 AM, wrote:
I think if your job is running and you want to deploy a new jar that is a newer version of the old one, Spark will treat the new jar as a separate job; jobs are distinguished by Job ID, so if you want to replace the jar you have to kill the running job every time.
--
When processing data, I create an RDD[Iterable[MyCaseClass]] and I want to convert it to RDD[MyCaseClass] so that it can then be converted to a Dataset or DataFrame with the toDS() function. But I ran into the problem that SparkContext cannot be instantiated within SparkSession.map.
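For reference, the flattening itself does not require a SparkContext inside any closure; flatMap(identity) does it on the executors. A minimal sketch, assuming a made-up MyCaseClass and sample data:

import org.apache.spark.sql.SparkSession

case class MyCaseClass(id: Int, name: String)

object FlattenExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    // stand-in for the nested RDD from the question
    val nested = spark.sparkContext.parallelize(Seq[Iterable[MyCaseClass]](
      Seq(MyCaseClass(1, "a"), MyCaseClass(2, "b")),
      Seq(MyCaseClass(3, "c"))))

    // flatMap(identity) flattens RDD[Iterable[MyCaseClass]] to RDD[MyCaseClass];
    // no SparkContext is used inside the closure
    val flat = nested.flatMap(identity)
    flat.toDS().show()

    spark.stop()
  }
}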
If it's just a couple of classes, they are actually suitable for serializing, and you have the source code, then you can shadow them in your own project with the Serializable interface added. Your shadowed classes should be on the classpath before the library's versions, which should lead to them being loaded instead of the originals.
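A minimal sketch of such a shadow, assuming a hypothetical library class com.thirdparty.MyParam (substitute the real package and class names):

// Copied source of the library class, identical except for Serializable.
// Compiled into your own project and placed earlier on the classpath,
// this version gets loaded instead of the library's.
package com.thirdparty

class MyParam(val value: String) extends Serializable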