Yes, I tried doing that, but it doesn't work.

I'm looking at using SQLContext and DataFrames. Is SQLContext serializable?
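For reference, here is roughly what I'm considering instead (a rough sketch only; the connector's joinWithCassandraTable and CassandraConnector are used instead of passing sc into the closure, and the query, row type, and `id` column are made up for illustration):

```scala
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql.CassandraConnector

// Option 1: let the connector do the lookup on the executors.
// joinWithCassandraTable pairs each RDD element with matching Cassandra
// rows, so no SparkContext is needed inside the closure.
val joined = reRDD.joinWithCassandraTable("keySpace", "tableName")

// Option 2: CassandraConnector is serializable, so it can be shipped to
// executors and used inside mapPartitions to open one session per partition.
val connector = CassandraConnector(sc.getConf)
val results = reRDD.mapPartitions { rows =>
  connector.withSessionDo { session =>
    rows.map { row =>
      // hypothetical query; replace with whatever method1 actually does
      session.execute("SELECT * FROM keySpace.tableName WHERE id = ?", row.id)
    }.toList.iterator // force evaluation before the session is closed
  }
}
```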

On Mon, Jan 18, 2016 at 1:29 PM, Ted Yu <yuzhih...@gmail.com> wrote:

> Did you mean constructing the SparkContext on the worker nodes?
>
> Not sure whether that would work.
>
> Doesn't seem to be good practice.
>
> On Mon, Jan 18, 2016 at 1:27 PM, Giri P <gpatc...@gmail.com> wrote:
>
>> Can we use @transient?
>>
>>
>> On Mon, Jan 18, 2016 at 12:44 PM, Giri P <gpatc...@gmail.com> wrote:
>>
>>> I'm using the Spark Cassandra Connector to do this, and the way we access a
>>> Cassandra table is:
>>>
>>> sc.cassandraTable("keySpace", "tableName")
>>>
>>> Thanks
>>> Giri
>>>
>>> On Mon, Jan 18, 2016 at 12:37 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>>>
>>>> Can you pass the properties needed for accessing Cassandra without going
>>>> through the SparkContext?
>>>>
>>>> SparkContext isn't designed to be used in the way illustrated below.
>>>>
>>>> Cheers
>>>>
>>>> On Mon, Jan 18, 2016 at 12:29 PM, gpatcham <gpatc...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I have a use case where I need to pass the SparkContext into a map function:
>>>>>
>>>>> reRDD.map(row => method1(row, sc)).saveAsTextFile(outputDir)
>>>>>
>>>>> method1 needs the SparkContext to query Cassandra, but I see the error below:
>>>>>
>>>>> java.io.NotSerializableException: org.apache.spark.SparkContext
>>>>>
>>>>> Is there a way to fix this?
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> View this message in context:
>>>>> http://apache-spark-user-list.1001560.n3.nabble.com/using-spark-context-in-map-funciton-TASk-not-serilizable-error-tp25998.html
>>>>> Sent from the Apache Spark User List mailing list archive at
>>>>> Nabble.com.
>>>>>
>>>>> ---------------------------------------------------------------------
>>>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>>>
>>>>>
>>>>
>>>
>>
>
