This is a known issue.
https://issues.apache.org/jira/browse/SPARK-3200
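
In case it helps while the JIRA is open, here is a minimal workaround sketch
(my own assumption, not something taken from the ticket): pass the captured
value into the lambda through a method parameter, so the closure serializes
only that value rather than the REPL line object holding the SparkContext
alias.

// Hypothetical workaround sketch for the repro below; addTemp is an
// illustrative helper, not an existing API.
import org.apache.spark.rdd.RDD

def addTemp(rdd: RDD[Int], t: Int): RDD[Int] = rdd.map(_ + t)

val temp = 10
val newSC = sc
val newRDD = addTemp(newSC.parallelize(0 to 100), temp)
newRDD.count()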


Prashant Sharma



On Thu, Mar 3, 2016 at 9:01 AM, Rahul Palamuttam <rahulpala...@gmail.com>
wrote:

> Thank you Jeff.
>
> I have filed a JIRA under the following link :
>
> https://issues.apache.org/jira/browse/SPARK-13634
>
> For some reason the spark context is being pulled into the referencing
> environment of the closure.
> I also had no problems with batch jobs.
>
> On Wed, Mar 2, 2016 at 7:18 PM, Jeff Zhang <zjf...@gmail.com> wrote:
>
>> I can reproduce it in spark-shell, but it works for batch jobs. Looks like
>> a Spark REPL issue.
>>
>> On Thu, Mar 3, 2016 at 10:43 AM, Rahul Palamuttam <rahulpala...@gmail.com
>> > wrote:
>>
>>> Hi All,
>>>
>>> We recently came across this issue when using the spark-shell and
>>> zeppelin.
>>> If we assign the SparkContext variable (sc) to a new variable and
>>> reference another variable in an RDD lambda expression, we get a
>>> "Task not serializable" exception.
>>>
>>> The following three lines of code illustrate this:
>>>
>>> val temp = 10
>>> val newSC = sc
>>> val newRDD = newSC.parallelize(0 to 100).map(p => p + temp)
>>>
>>> I am not sure if this is a known issue, or we should file a JIRA for it.
>>> We originally came across this bug in the SciSpark project.
>>>
>>> Best,
>>>
>>> Rahul P
>>>
>>
>>
>>
>> --
>> Best Regards
>>
>> Jeff Zhang
>>
>
>