Hi Ted, all,
Do you have any advice regarding my questions in my initial email?
I tried Spark 1.5.2 and 1.6.0 without success. The problem seems to be
that RDDs use some transient fields which are not restored when they
are recovered from checkpoint files. In the case of some RDD
implementations
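The transient-field failure mode described above can be illustrated with a plain-Java sketch (this is not Spark code; the field names are illustrative): Java serialization skips `transient` fields, so after a write/read round-trip such a field comes back as `null`, which is the assumed mechanism behind a checkpointed RDD losing its Configuration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Plain-Java sketch (not Spark code): a field marked `transient` is
// skipped by Java serialization, so after a round-trip it is null.
// Field names are illustrative assumptions, not Spark internals.
public class TransientDemo implements Serializable {
    transient String conf = "hadoop/hbase configuration"; // lost on round-trip
    String tableName = "lookup";                          // survives

    static TransientDemo roundTrip(TransientDemo in) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(in);
        }
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (TransientDemo) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        TransientDemo restored = roundTrip(new TransientDemo());
        System.out.println("tableName = " + restored.tableName);
        System.out.println("conf = " + restored.conf); // prints "conf = null"
    }
}
```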
I looked at the places in SparkContext.scala where NewHadoopRDD is
constructed.
It seems the Configuration object shouldn't be null.
Which HBase release are you using (so that I can see which line the NPE
came from)?
Thanks
On Fri, Mar 18, 2016 at 8:05 AM, Lubomir Nerad wrote:
The HBase version is 1.0.1.1.
Thanks,
Lubo
On 18.3.2016 17:29, Ted Yu wrote:
> I looked at the places in SparkContext.scala where NewHadoopRDD is
> constructed.
> It seems the Configuration object shouldn't be null.
> Which HBase release are you using (so that I can see which line the
> NPE came from)?
This is the line where NPE came from:
if (conf.get(SCAN) != null) {
So the Configuration instance was null.
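A minimal stand-in (NOT the real HBase class) shows why that line throws: HBase's TableInputFormat reads its scan specification from a Configuration, and if that object was never (re)set after recovery, `conf` itself is null, so `conf.get(SCAN)` fails before the value null-check can even run. The `Map` here is a simplified stand-in for `org.apache.hadoop.conf.Configuration`.

```java
import java.util.Map;

// Simplified stand-in for the pattern behind the NPE in the thread.
// "hbase.mapreduce.scan" is the key TableInputFormat uses for its
// serialized Scan; everything else here is an illustrative sketch.
public class ScanConfDemo {
    static final String SCAN = "hbase.mapreduce.scan";
    Map<String, String> conf; // never initialized -> null, as after recovery

    boolean hasScan() {
        // Mirrors the failing line: when conf is null, the NPE is thrown
        // by the conf.get(SCAN) call, not by the != null comparison.
        return conf.get(SCAN) != null;
    }
}
```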
On Fri, Mar 18, 2016 at 9:58 AM, Lubomir Nerad wrote:
> The HBase version is 1.0.1.1.
>
> Thanks,
> Lubo
Hi,
I tried to replicate the example of joining DStream with lookup RDD from
http://spark.apache.org/docs/latest/streaming-programming-guide.html#transform-operation.
It works fine, but when I enable checkpointing for the StreamingContext
and let the application recover from a previously cr