> null means that "hbase.defaults.for.version" was not set in the other
> hbase-default.xml.
>
> Can you retrieve the classpath of the Spark task so that we can have more
> clues?
>
>
> Cheers
>
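One way to check that (a sketch, not from the original thread: it assumes a running spark-shell with its SparkContext bound to `sc`) is to run a small job that makes each executor report its JVM classpath and which jar, if any, the hbase-default.xml resource is loaded from:

```scala
// Sketch: run inside spark-shell; `sc` is the SparkContext the shell provides.
// Each task reports the executor JVM's classpath and the URL of the
// hbase-default.xml resource it would load (null if none is on the classpath).
val info = sc.parallelize(1 to 2, 2).map { _ =>
  val cp  = System.getProperty("java.class.path")
  val res = getClass.getResource("/hbase-default.xml")
  s"classpath=$cp\nhbase-default.xml=$res"
}.collect()
info.foreach(println)
```

A null resource URL on the executor side would line up with the "hbase.defaults.for.version not set" symptom above.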
> On Tue, Nov 17, 2015 at 10:06 PM, 임정택 <kabh...@gmail.com> wrote:
<hih...@gmail.com>:
> I am a bit curious:
> HBase depends on HDFS.
> Has HDFS support for Mesos been fully implemented?
>
> Last time I checked, there was still work to be done.
>
> Thanks
>
> On Nov 17, 2015, at 1:06 AM, 임정택 <kabh...@gmail.com> wrote:
>
> You can set hbase.defaults.for.version.skip as true in your
> hbase-site.xml
>
> Cheers
>
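For reference, that setting is an hbase-site.xml property; a sketch of the fragment (the rest of the file is omitted) looks like:

```xml
<!-- hbase-site.xml fragment: skip the version check against hbase-default.xml -->
<property>
  <name>hbase.defaults.for.version.skip</name>
  <value>true</value>
</property>
```

This only suppresses the version check; it does not fix a classpath that is missing hbase-default.xml in the first place.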
> On Tue, Nov 17, 2015 at 1:01 AM, 임정택 <kabh...@gmail.com> wrote:
>
>> Hi all,
>>
>> I'm evaluating Zeppelin to run a driver that interacts with HBase.
>> I use a fat jar to include the HBase dependencies, and I see failures at
>> the executor level.
Oh, one thing I missed: I built a Spark 1.4.1 cluster on 6 nodes of a
Mesos 0.22.1 H/A (via ZK) cluster.
2015-11-17 18:01 GMT+09:00 임정택 <kabh...@gmail.com>:
> Hi all,
>
> I'm evaluating Zeppelin to run a driver that interacts with HBase.
> I use a fat jar to include the HBase dependencies, and I see failures at
> the executor level.
Hi all,
I'm evaluating Zeppelin to run a driver that interacts with HBase.
I use a fat jar to include the HBase dependencies, and I see failures at the
executor level.
I thought it was Zeppelin's issue, but it fails on spark-shell, too.
I loaded the fat jar via the --jars option:
> ./bin/spark-shell --jars
(HeartSaVioR)
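The --jars invocation above is truncated in the archive; a generic sketch of the command (the jar path and Mesos/ZK addresses are placeholders, not from the thread) would look like:

```shell
# Sketch: placeholder paths/hosts; substitute your own fat jar and ZK quorum.
./bin/spark-shell \
  --master mesos://zk://zk1:2181,zk2:2181,zk3:2181/mesos \
  --jars /path/to/app-assembly-with-hbase.jar
```

Jars passed via --jars are shipped to the executors and appended to their classpaths, which is why the contents of the fat jar matter for the hbase-default.xml lookup discussed above.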
2015-11-17 18:06 GMT+09:00 임정택 <kabh...@gmail.com>:
> Oh, one thing I missed: I built a Spark 1.4.1 cluster on 6 nodes of a
> Mesos 0.22.1 H/A (via ZK) cluster.
>