Re: can't apply mappartitions to dataframe generated from carboncontext

2017-07-12 Thread Mic Sun
I modified the CarbonContext.scala file, adding @transient to the two member variables, and the problem is solved:

    @transient val sc: SparkContext
    @transient val hiveClientInterface = metadataHive

- FFCS Research Institute
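For readers following along, here is a minimal, self-contained sketch of the idea described above, not the actual CarbonContext source: marking the non-serializable members @transient keeps them out of whatever closure Spark serializes, which is what avoids the "Task not serializable" error reported later in this thread. The class name ExampleContext and the placeholder field initializer are illustrative assumptions; only the field names sc and hiveClientInterface come from the message.

    import org.apache.spark.SparkContext

    // Illustrative sketch only (not the real CarbonContext source):
    // @transient tells Java serialization to skip these fields, so a closure
    // that captures an instance of this class no longer drags a SparkContext
    // or a Hive client handle into the serialized task.
    class ExampleContext(@transient val sc: SparkContext) extends Serializable {
      // Stand-in for the Hive client reference mentioned in the message above.
      @transient lazy val hiveClientInterface: AnyRef = new Object()
    }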

Re: can't apply mappartitions to dataframe generated from carboncontext

2017-06-18 Thread Erlu Chen
Hi, can you share with me your test steps for reproducing this issue? I mean the complete test steps. Thanks. Chenerlu.

Re: Reply: can't apply mappartitions to dataframe generated from carboncontext

2017-06-16 Thread Erlu Chen
Hi, I think you can debug on Windows by adding some debug parameters when starting spark-shell on Linux. This is what is called remote debugging. I tried this method when I was using Windows; I hope my idea can help you. Regards. Chenerlu.
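As a hedged sketch of what the remote-debug setup described above could look like, assuming the standard --driver-java-options flag of spark-shell and the usual JVM JDWP agent (the port 5005 and the host placeholder are arbitrary illustrations, not taken from the original message):

    # On the Linux machine, start spark-shell with the JVM debug agent enabled:
    spark-shell --driver-java-options \
      "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"

    # Then, from the IDE on Windows, attach a remote debugger to <linux-host>:5005.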

Re: can't apply mappartitions to dataframe generated from carboncontext

2017-06-12 Thread Mic Sun
org.apache.spark.SparkException: Task not serializable
        at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
        at

Re: can't apply mappartitions to dataframe generated from carboncontext

2017-06-11 Thread Erlu Chen
Hi, Mic Sun. Can you paste your error message directly? It seems I can't get access to your attachment. Thanks in advance. Regards. Chenerlu.