Hi,
Every time I start the spark-shell I encounter this message:
14/11/18 00:27:43 WARN hdfs.BlockReaderLocal: The short-circuit local reads
feature cannot be used because libhadoop cannot be loaded.
Any idea how to overcome it? The short-circuit feature is a big
performance boost I don't want to lose.
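For context, here is what I have tried so far to diagnose it (the install paths below are guesses for a typical tarball install, not verified on every layout):

```shell
# Check which native libraries the Hadoop client can actually load.
# "hadoop checknative -a" prints true/false per library; if the
# "hadoop:" line shows false, libhadoop.so is not on the JVM's
# library path, which is exactly what the warning complains about.
hadoop checknative -a

# Typical fix for a tarball install (path is an assumption, adjust):
export LD_LIBRARY_PATH="$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH"

# For Spark itself, the equivalent settings in spark-defaults.conf:
#   spark.driver.extraLibraryPath    /opt/hadoop/lib/native
#   spark.executor.extraLibraryPath  /opt/hadoop/lib/native
```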
> This was supposedly fixed in newer versions of Hadoop, but I haven't
> verified it.
>
> -Kay
>
> ---------- Forwarded message ----------
> From: Andrew Ash
> Date: Tue, Sep 30, 2014 at 1:33 PM
> Subject: Re: Short Circuit Local Reads
> To: Matei Zaharia
> Cc: "user@spark.apache.org", G
> ...automatically benefit from this if you link it to a version of
> HDFS that contains this.
>
> Matei
>
> On September 17, 2014 at 5:15:47 AM, Gary Malouf (malouf.g...@gmail.com)
> wrote:
>
> Cloudera had a blog post about this in August 2013:
> http://blog.cloudera.com/blog/2013/08/how-improved-short-circuit-local-reads-bring-better-performance-and-security-to-hadoop/
>
> Has anyone been using this in production? Curious as to whether it made
> a significant difference from a Spark perspective.
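For anyone finding this thread later: per the Cloudera post linked above, the newer (HDFS-347 style) short-circuit reads are enabled in hdfs-site.xml on both the DataNodes and the client (Spark) side. A minimal sketch — the socket path below is only an example:

```xml
<!-- hdfs-site.xml, on DataNodes and on the client side -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <!-- UNIX domain socket shared by the DataNode and the client;
       the path is an example, and its parent directory must not be
       writable by untrusted users -->
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
```

Note that this ties back to the warning at the top of the thread: the short-circuit implementation goes through libhadoop's native code, so if libhadoop cannot be loaded, HDFS silently falls back to the ordinary TCP read path and you get no benefit from the settings above.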