-
From: "Amit Hora" <hora.a...@gmail.com>
Sent: 4/13/2016 11:41 AM
To: "Jörn Franke" <jornfra...@gmail.com>
Cc: "user@spark.apache.org" <user@spark.apache.org>
Subject: RE: Unable to Access files in Hadoop HA enabled from using Spark
There are DNS entries for both of my namenodes.
Ambarimaster is the standby namenode and resolves to its IP perfectly.
Hdp231 is the active namenode and also resolves to its IP.
Hdpha is my Hadoop HA cluster (nameservice) name, and hdfs-site.xml has the entries for this configuration.
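For reference, a minimal hdfs-site.xml HA client configuration for a nameservice named hdpha typically looks like the sketch below. The nameservice name and the two hosts come from this thread; the logical namenode ids (nn1/nn2) and the RPC port 8020 are assumptions, so adjust them to your cluster:

```xml
<property>
  <name>dfs.nameservices</name>
  <value>hdpha</value>
</property>
<property>
  <name>dfs.ha.namenodes.hdpha</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdpha.nn1</name>
  <value>hdp231:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdpha.nn2</name>
  <value>ambarimaster:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.hdpha</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

Spark clients must see these entries (e.g. via HADOOP_CONF_DIR) so that hdfs://hdpha/... resolves through the failover proxy provider instead of being treated as a hostname.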
-Original Message-
From: "Jörn Franke"
Thanks for sharing the link. Yes, I understand that accumulator and broadcast
variable state is not recovered from a checkpoint, but is there any way I can
specify that the HBaseContext in this case should not be recovered from the
checkpoint and must instead be reinitialized?
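A common workaround (a sketch of the usual pattern, not a confirmed answer from this thread) is to hold the non-checkpointable object in a lazily initialized per-JVM singleton, so a restarted driver or executor rebuilds it on first use rather than restoring it from checkpoint state. The object and method names below are assumptions; only HBaseConfiguration/ConnectionFactory are real HBase client APIs:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory}

// Hypothetical holder: the connection lives only in the running JVM and is
// never serialized into the streaming checkpoint.
object HBaseConnectionHolder {
  @transient private var connection: Connection = _

  def get(): Connection = synchronized {
    // Rebuild after a restart (or if the connection was closed).
    if (connection == null || connection.isClosed) {
      val conf: Configuration = HBaseConfiguration.create()
      connection = ConnectionFactory.createConnection(conf)
    }
    connection
  }
}
```

Calling HBaseConnectionHolder.get() inside foreachRDD/foreachPartition means the connection is created fresh where the code runs, so nothing HBase-related needs to survive checkpoint recovery.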
-Original
One region
-Original Message-
From: "Ted Yu"
Sent: 20-10-2015 15:01
To: "Amit Singh Hora"
Cc: "user"
Subject: Re: Spark opening to many connection with zookeeper
How many regions does your table have?
Which HBase
and saveAsHadoopDataset method?
-Original Message-
From: "Amit Hora" <hora.a...@gmail.com>
Sent: 20-10-2015 20:38
To: "Ted Yu" <yuzhih...@gmail.com>
Cc: "user" <user@spark.apache.org>
Subject: RE: Spark opening to many connection with zookeeper
I
Request to share if you come across any hint
-Original Message-
From: "Ted Yu" <yuzhih...@gmail.com>
Sent: 20-10-2015 20:19
To: "Amit Hora" <hora.a...@gmail.com>
Cc: "user" <user@spark.apache.org>
Subject: Re: Spark opening to many connection with zookeeper
Can you take a look at exa
-
From: "Ted Yu" <yuzhih...@gmail.com>
Sent: 20-10-2015 21:30
To: "Amit Hora" <hora.a...@gmail.com>
Cc: "user" <user@spark.apache.org>
Subject: Re: Spark opening to many connection with zookeeper
I need to dig deeper into saveAsHadoopDataset to see
Hi,
Regrets for the delayed response.
Please find the full stack trace below:
java.lang.ClassCastException: scala.runtime.BoxedUnit cannot be cast to
org.apache.hadoop.hbase.client.Mutation
	at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:85)
at
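This cast failure usually means the pair RDD handed to saveAsHadoopDataset has Unit values instead of an HBase Mutation, e.g. when the mapping function performs the put as a side effect rather than returning the Put. A sketch of the expected shape follows; the table name, column family, and helper function are made up for illustration:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf
import org.apache.spark.SparkContext

// Hypothetical helper showing the (key, Mutation) pair shape that
// saveAsHadoopDataset expects when writing to HBase.
def writeToHBase(sc: SparkContext, rows: Seq[(String, String)]): Unit = {
  val jobConf = new JobConf(HBaseConfiguration.create())
  jobConf.setOutputFormat(classOf[TableOutputFormat])
  jobConf.set(TableOutputFormat.OUTPUT_TABLE, "my_table") // hypothetical table

  sc.parallelize(rows)
    .map { case (rowKey, value) =>
      val put = new Put(Bytes.toBytes(rowKey))
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(value))
      // Return the (key, Put) pair -- a map body ending in a bare put call
      // returns Unit, which produces exactly this BoxedUnit -> Mutation
      // ClassCastException at write time.
      (new ImmutableBytesWritable(Bytes.toBytes(rowKey)), put)
    }
    .saveAsHadoopDataset(jobConf)
}
```

Creating the JobConf (or reusing one Connection per partition) also keeps the number of ZooKeeper connections bounded, which relates to the subject of this thread.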