Scheduler.scala:175)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:745)
>>
> On Tue, Sep 22, 2015 at 6:49 PM, Adrian Tanase <atan...@adobe.com> wrote:
>
>> Have you tried simply ssc.checkpoint("checkpoint")? This should create it
>
nonfun$run$1.apply(JobScheduler.scala:176)
>>> at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:176)
>>> at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>>> a
> …local mode.
>
> For the others (/tmp/..) make sure you have rights to write there.
>
> -adrian
>
> From: srungarapu vamsi
> Date: Tuesday, September 22, 2015 at 7:59 AM
> To: user
> Subject: Invalid checkpoint url
>
> I am using reduceByKeyAndWindow (with inverse re
I am using reduceByKeyAndWindow (with an inverse reduce function) in my code.
To use this, it seems the checkpointDirectory I have to use should be on a
Hadoop-compatible file system.
Does that mean I should set up Hadoop on my system?
I googled this and found in a S.O answer
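For reference, a minimal sketch of what the suggested setup looks like in practice. All names here (app name, socket host/port, durations, the /tmp checkpoint path) are illustrative assumptions, not taken from the thread; the point is that ssc.checkpoint accepts any Hadoop-compatible URI, and a plain local directory qualifies because Spark checkpoints through Hadoop's FileSystem API, which ships with a local-filesystem implementation — so no Hadoop installation is needed for local mode.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WindowedCountsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("WindowedCounts")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Any Hadoop-compatible URI works here: "hdfs://...", an S3 path,
    // or a local directory like this one (make sure it is writable).
    ssc.checkpoint("file:///tmp/spark-checkpoint")

    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split(" ")).map((_, 1))
      .reduceByKeyAndWindow(
        (a: Int, b: Int) => a + b, // reduce: add counts entering the window
        (a: Int, b: Int) => a - b, // inverse reduce: subtract counts leaving it
        Seconds(60),               // window length
        Seconds(10))               // slide interval

    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```

The inverse-reduce variant is exactly why checkpointing is mandatory: Spark must retain intermediate window state across batches, and it persists that state to the checkpoint directory.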