Re: Java: pass parameters in spark sql query

2018-11-30 Thread 965
Hi, I don't think Java's native syntax supports that; you should do it the Java way by declaring a variable, e.g. int parameter = 100; spark.sql("select * from table where parameter = " + parameter)
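A minimal self-contained sketch of that suggestion, written in Scala like most of this digest (the view name `events`, column `amount`, and variable `threshold` are invented). spark.sql() takes a plain string in this Spark version, so the value is spliced into the query text; in Scala that is usually string interpolation rather than concatenation:

    import org.apache.spark.sql.SparkSession

    object ParamQueryDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("param-demo").getOrCreate()
        import spark.implicits._
        // Register a throwaway view so the query below has something to read.
        Seq((1, 100), (2, 200)).toDF("id", "amount").createOrReplaceTempView("events")

        val threshold = 100 // the "parameter", declared as an ordinary variable
        val df = spark.sql(s"SELECT * FROM events WHERE amount = $threshold")
        df.show()
        spark.stop()
      }
    }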

Re: Caused by: java.io.NotSerializableException: com.softwaremill.sttp.FollowRedirectsBackend

2018-11-30 Thread James Starks
Shadowing with object MyObject { def mymethod(param: MyParam) = actual_function(param) }; class MyObject { import MyObject._; session.map { ... => mymethod(...) } } does the job. Thanks for the advice!
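Spelled out a little more, the pattern looks roughly like this (all names hypothetical): the closure references only a method on a top-level object, so the non-serializable resource is constructed per JVM instead of being shipped with the task.

    import org.apache.spark.sql.SparkSession

    object HttpHelper {
      // The non-serializable resource lives in a top-level object: it is
      // initialized lazily on each JVM and never captured by a closure.
      private lazy val client = new Object // stand-in for the sttp backend

      def fetch(url: String): Int =
        url.length // placeholder; a real job would call the backend in `client`
    }

    object ShadowDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("shadow-demo").getOrCreate()
        import spark.implicits._
        val urls = Seq("http://a.example", "http://b.example").toDS()
        // Only HttpHelper.fetch is referenced, so nothing non-serializable
        // is pulled into the serialized task.
        urls.map(HttpHelper.fetch).show()
        spark.stop()
      }
    }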

Re: Do we need to kill a spark job every time we change and deploy it?

2018-11-30 Thread 965
I think if your job is running and you want to deploy a new jar that is a new version of it, Spark will treat the new jar as another job; jobs are distinguished by Job ID, so if you want to replace the jar you have to kill the job every time.

Convert RDD[Iterable[MyCaseClass]] to RDD[MyCaseClass]

2018-11-30 Thread James Starks
When processing data I create an instance of RDD[Iterable[MyCaseClass]] and I want to convert it to RDD[MyCaseClass] so that it can be further converted to a Dataset or DataFrame with the toDS() function. But I encounter a problem: SparkContext cannot be instantiated within SparkSession.map
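One common way to do the conversion is to flatten with flatMap, which never touches SparkContext inside a closure; a minimal sketch (the fields of MyCaseClass are invented, and this is not necessarily what the thread settled on):

    import org.apache.spark.sql.SparkSession

    // Top-level case class so Spark can derive an Encoder for toDS().
    case class MyCaseClass(id: Long, name: String)

    object FlattenDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("flatten-demo").getOrCreate()
        import spark.implicits._

        val nested = spark.sparkContext.parallelize(Seq[Iterable[MyCaseClass]](
          Seq(MyCaseClass(1, "a"), MyCaseClass(2, "b")),
          Seq(MyCaseClass(3, "c"))
        ))
        // RDD[Iterable[MyCaseClass]] -> RDD[MyCaseClass]
        val flat = nested.flatMap(identity)
        flat.toDS().show() // element type is now a case class, so toDS() applies
        spark.stop()
      }
    }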

Re: Caused by: java.io.NotSerializableException: com.softwaremill.sttp.FollowRedirectsBackend

2018-11-30 Thread chris
If it’s just a couple of classes, they are actually suitable for serializing, and you have the source code, then you can shadow them in your own project with the Serializable interface added. Your shadowed classes should be on the classpath before the library’s versions, which should lead to your copies being loaded instead.
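Concretely, the shadowing might look like the following, with invented package and class names standing in for the library source you would copy:

    // File placed in your own project at the library's package path, so that
    // (with your classes ahead of the library jar on the classpath) this copy
    // is loaded instead of the original.
    package com.thirdparty.http

    // Hypothetical stand-in for a type the copied class depends on.
    trait Wire { def send(body: String): String }

    // Copy of the library class, identical except for `extends Serializable`.
    class RedirectingClient(wire: Wire) extends Serializable {
      def send(body: String): String = wire.send(body) // body copied verbatim
    }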