Hello!

Without overwrite=true, the Data Streamer used in the underlying implementation
will skip the Cache Store (along with other things), so write-through never
fires. Passing overwrite=true routes puts through the regular cache API, which
does invoke the Cache Store.
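For reference, a minimal sketch of the fixed call, assuming the same IgniteContext (ic) and pair RDD (rdd) from the code quoted below in this thread; the second argument of savePairs corresponds to the streamer's allowOverwrite setting:

```scala
// Sketch only — requires the ignite-spark module on the classpath and a
// running Ignite cluster; ic and rdd come from the application code below.
// overwrite = true disables the streamer's skip-store optimization, so the
// write-through Cache Store (POSTGRES here) is invoked for each entry.
ic.fromCache("RoleCache").savePairs(rdd, overwrite = true)
```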

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 13, 2019 at 18:10, Harshal Patil <harshal.pa...@mindtickle.com>:

> Hi Ilya ,
> Thanks for the solution, it worked.
> But can you please explain why overwrite = true is required when I run
> Spark in a master-slave configuration?
>
> On Wed, Mar 13, 2019 at 8:26 PM Ilya Kasnacheev <ilya.kasnach...@gmail.com>
> wrote:
>
>> Hello!
>>
>> Please try savePairs(rdd, true).
>>
>> Hope it helps!
>> --
>> Ilya Kasnacheev
>>
>>
>> Wed, Mar 13, 2019 at 17:41, Harshal Patil <harshal.pa...@mindtickle.com
>> >:
>>
>>> Hi ,
>>> I am using Spark 2.3.0 and Ignite 2.7.0. I have enabled POSTGRES as a
>>> persistent store through GridGain's automatic RDBMS integration, and I
>>> have enabled the write-through cache. I can see data being persisted in
>>> POSTGRES when I run Spark in standalone mode, with
>>>
>>> val conf = new SparkConf()
>>>
>>> conf.setMaster("local[*]")
>>>
>>>
>>> But when I have the master configured for Spark like,
>>>
>>>
>>> conf.setMaster("spark://harshal-patil.local:7077")
>>>
>>>
>>> my data is not getting persisted in POSTGRES, but I can see the cache is
>>> updated.
>>>
>>>
>>> The operation I am doing is ic.fromCache("RoleCache").savePairs(rdd)
>>>
>>>
>>> Please help me understand what could be going wrong.
>>>
>>>
>>> Regards,
>>>
>>> Harshal
>>>
