Thanks for the follow-up, Dale.
bq. hdp 2.3.1
Minor correction: should be hdp 2.1.3
Cheers
On Sat, Mar 28, 2015 at 2:28 AM, Johnson, Dale daljohn...@ebay.com wrote:
Actually I did figure this out eventually.
I’m running on a Hortonworks cluster hdp 2.3.1 (hadoop 2.4.1). Spark bundles
the org/apache/hadoop/hdfs/… classes along with the spark-assembly jar. This
turns out to introduce a small incompatibility with hdp 2.3.1. I carved these
classes out of
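The "carving out" step Dale describes can be sketched. This is a hypothetical Java sketch (class name and paths are made up, and in practice something like `zip -d spark-assembly.jar 'org/apache/hadoop/hdfs/*'` does the same job): it copies an assembly jar while dropping the bundled HDFS client classes so the cluster's own HDFS client is picked up instead.

```java
import java.io.*;
import java.util.zip.*;

// Hypothetical sketch: copy a jar, skipping the bundled
// org/apache/hadoop/hdfs classes. Names and paths are illustrative only.
public class CarveHdfsClasses {
    public static void carve(String inJar, String outJar) throws IOException {
        try (ZipInputStream in = new ZipInputStream(new FileInputStream(inJar));
             ZipOutputStream out = new ZipOutputStream(new FileOutputStream(outJar))) {
            byte[] buf = new byte[8192];
            ZipEntry e;
            while ((e = in.getNextEntry()) != null) {
                // Drop the bundled HDFS client classes.
                if (e.getName().startsWith("org/apache/hadoop/hdfs/")) continue;
                out.putNextEntry(new ZipEntry(e.getName()));
                int n;
                while ((n = in.read(buf)) > 0) out.write(buf, 0, n);
                out.closeEntry();
            }
        }
    }
}
```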
Yes, I could recompile the HDFS client with more logging, but I don't have a
day or two to spare this week.
One more thing about this: the cluster is Hortonworks 2.1.3 [.0]
They seem to claim support for Spark on Hortonworks 2.2
Dale.
From: Ted Yu
Probably a Guava version conflict issue. Which Spark version did you use, and
which Hadoop version was it compiled against?
Thanks.
Zhan Zhang
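A quick way to investigate the suspected Guava version conflict is to ask the JVM which jar a class was actually loaded from. A small standalone sketch (the `WhichJar` class is hypothetical; to inspect Guava itself you would pass `com.google.common.base.Preconditions.class`, assuming Guava is on the classpath — here it inspects itself so it runs anywhere):

```java
// Sketch: report the jar (or directory) a class was loaded from, which is
// useful when two versions of a library are on the classpath.
public class WhichJar {
    public static String locationOf(Class<?> c) {
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        // Bootstrap classes (e.g. java.lang.String) have no code source.
        return src == null ? "(bootstrap / no code source)" : src.getLocation().toString();
    }
    public static void main(String[] args) {
        System.out.println(locationOf(WhichJar.class));
    }
}
```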
On Mar 27, 2015, at 12:13 PM, Johnson, Dale
daljohn...@ebay.com wrote:
Yes, I could recompile the hdfs client with more logging,
There seems to be a special kind of "corrupted according to Spark" state of
file in HDFS. I have isolated a set of files (maybe 1% of all files I need
to work with) which produce the following stack dump when I try to open
them with sc.textFile(). When I try to open directories, most large
Looks like the following assertion failed:
Preconditions.checkState(storageIDsCount == locs.size());
locs is List&lt;DatanodeInfoProto&gt;
Can you enhance the assertion to log more information ?
Cheers
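For reference, Guava's Preconditions.checkState has an overload that takes a message template plus arguments, which is the usual way to make such an assertion report more context. A standalone sketch of the enhanced assertion (it uses a tiny stand-in for checkState so it compiles without Guava, and String.format instead of Guava's %s-only templates; the variable values are invented):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of an assertion that logs both counts on failure. The checkState
// here is a minimal stand-in for Guava's
// Preconditions.checkState(boolean, String, Object...).
public class CheckStateDemo {
    static void checkState(boolean ok, String template, Object... args) {
        if (!ok) throw new IllegalStateException(String.format(template, args));
    }
    public static void main(String[] args) {
        int storageIDsCount = 2;                                   // invented value
        List<String> locs = Arrays.asList("dn1", "dn2", "dn3");    // List<DatanodeInfoProto> in HDFS
        checkState(storageIDsCount == locs.size(),
                "storageIDsCount=%d does not match locs.size()=%d",
                storageIDsCount, locs.size());
    }
}
```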
On Thu, Mar 26, 2015 at 3:06 PM, Dale Johnson daljohn...@ebay.com wrote:
There seems to be a