Thanks for the update. We are using version 2.0.

So we are planning to write our own custom logic to remove the null values.
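
Something along these lines is what I have in mind (a rough, untested sketch;
"ks"/"tbl" and the helper name are placeholders): build each INSERT from only
the non-null columns of the row, using the connector's session handling.

    import com.datastax.spark.connector.cql.CassandraConnector
    import org.apache.spark.sql.{DataFrame, Row}

    def insertNonNullColumns(df: DataFrame, keyspace: String, table: String): Unit = {
      val columns = df.columns
      val connector = CassandraConnector(df.sparkSession.sparkContext.getConf)
      df.foreachPartition { rows: Iterator[Row] =>
        connector.withSessionDo { session =>
          rows.foreach { row =>
            // Keep only the columns that are non-null in this row.
            val present = columns.indices.filterNot(row.isNullAt)
            val cols = present.map(i => columns(i)).mkString(", ")
            val placeholders = present.map(_ => "?").mkString(", ")
            val values = present.map(i => row.get(i).asInstanceOf[AnyRef])
            session.execute(
              s"INSERT INTO $keyspace.$table ($cols) VALUES ($placeholders)",
              values: _*)
          }
        }
      }
    }

    // e.g. insertNonNullColumns(df, "ks", "tbl")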

Thanks,
Selvam R

On Fri, Aug 26, 2016 at 9:08 PM, Russell Spitzer <russell.spit...@gmail.com>
wrote:

> Cassandra does not differentiate between null and empty, so when reading
> from C* all empty values are reported as null. To avoid inserting nulls
> (and the tombstones they create), see
>
> https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md#globally-treating-all-nulls-as-unset
>
> This will not prevent those columns from being read back as null, though; it
> only skips writing tombstones.
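>
> As a rough example of that setting with the DataFrame API (keyspace/table/host
> names here are placeholders), the property described in the doc above is
> spark.cassandra.output.ignoreNulls:
>
>     import org.apache.spark.sql.{DataFrame, SparkSession}
>
>     val spark = SparkSession.builder()
>       .config("spark.cassandra.connection.host", "127.0.0.1")
>       // Nulls are written as UNSET, so no tombstones are created.
>       .config("spark.cassandra.output.ignoreNulls", "true")
>       .getOrCreate()
>
>     def save(df: DataFrame): Unit =
>       df.write
>         .format("org.apache.spark.sql.cassandra")
>         .options(Map("keyspace" -> "ks", "table" -> "tbl"))
>         .mode("append")
>         .save()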
>
> On Thu, Aug 25, 2016, 1:23 PM Selvam Raman <sel...@gmail.com> wrote:
>
>> Hi ,
>>
>> Dataframe:
>> colA  colB  colC  colD  colE
>> 1     2     3     4     5
>> 1     2     3     null  null
>> 1     null  null  null  5
>> null  null  3     4     5
>>
>> I want to insert this DataFrame into a NoSQL database (Cassandra), where some
>> of the values are null. So for each row I need to insert only the columns that
>> have non-null values.
>>
>> Expected:
>>
>> Record 1: (1,2,3,4,5)
>> Record 2: (1,2,3)
>> Record 3: (1,5)
>> Record 4: (3,4,5)
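>>
>> Roughly, for each row I want to keep only the non-null values, e.g. (sketch):
>>
>>     // Collect the non-null values of each row, in column order.
>>     val records = df.rdd.map { row =>
>>       (0 until row.length).collect { case i if !row.isNullAt(i) => row.get(i) }
>>     }
>>     records.collect().foreach(r => println(r.mkString("(", ",", ")")))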
>>
>> --
>> Selvam Raman
>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>
>


-- 
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
