Hi,
Every time I start the spark-shell I encounter this message:
14/11/18 00:27:43 WARN hdfs.BlockReaderLocal: The short-circuit local reads
feature cannot be used because libhadoop cannot be loaded.
Any idea how to overcome it? The short-circuit feature is a big performance boost I don't want to miss:
http://blog.cloudera.com/blog/2013/08/how-improved-short-circuit-local-reads-bring-better-performance-and-security-to-hadoop/
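For reference, short-circuit reads have to be enabled in hdfs-site.xml on both the DataNodes and the client side. A minimal sketch of that configuration (the socket path below is illustrative; it must be a path the DataNode user can create, and your distribution may already set a different one):

```xml
<!-- hdfs-site.xml: minimal sketch for enabling short-circuit local reads.
     The domain socket path is an example value, not a required one. -->
<configuration>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/lib/hadoop-hdfs/dn_socket</value>
  </property>
</configuration>
```

Note that even with this configuration in place, the feature needs the native libhadoop library (which is what the warning above is complaining about); running `hadoop checknative -a` on the node shows whether the native library is being picked up.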
in newer versions of Hadoop but I haven't verified it.
-Kay
-- Forwarded message --
From: Andrew Ash and...@andrewash.com
Date: Tue, Sep 30, 2014 at 1:33 PM
Subject: Re: Short Circuit Local Reads
To: Matei Zaharia matei.zaha...@gmail.com
Cc: user@spark.apache.org, Gary Malouf malouf.g
Cloudera had a blog post about this in August 2013:
http://blog.cloudera.com/blog/2013/08/how-improved-short-circuit-local-reads-bring-better-performance-and-security-to-hadoop/
Has anyone been using this in production? Curious as to whether it made a
significant difference from a Spark perspective.