Hope this may help you:
http://blogs.impetus.com/big_data/big_data_technologies/SnappyCompressionInHBase.do
On Thu, Dec 11, 2014 at 7:25 AM, Fabio wrote:
> Plain Apache Hadoop 2.5.0.
> Too bad it didn't work, hope someone can help.
>
>
> On 12/10/2014 06:22 PM, peterm_second wrote:
>
>> Hi Fabio,
Plain Apache Hadoop 2.5.0.
Too bad it didn't work, hope someone can help.
On 12/10/2014 06:22 PM, peterm_second wrote:
Hi Fabio,
Thanks for the reply, but unfortunately it didn't work. I am using
vanilla Hadoop 2.4 with vanilla Hive 0.14 and so on; I am using the
vanilla distros.
I did set th
Hi,
although the cluster is configured with 128MB, the client always goes with the
configuration local to it. So in this case it will use 64MB.
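For example, a minimal sketch (the path and values are made up, just to illustrate) of a client pinning the block size itself:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // the client-side value wins over the cluster default
    // (dfs.block.size is the deprecated name of the same key)
    conf.setLong("dfs.blocksize", 128L * 1024 * 1024); // 128MB
    FileSystem fs = FileSystem.get(conf);
    // or pass the block size explicitly for a single file
    FSDataOutputStream out = fs.create(new Path("/tmp/demo.bin"),
        true,                // overwrite
        4096,                // buffer size
        (short) 3,           // replication
        128L * 1024 * 1024); // block size
    out.writeBytes("hello");
    out.close();
  }
}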
Date: Wed, 10 Dec 2014 15:27:07 -0500
Subject: HDFS block size question
From: sajid...@gmail.com
To: user@hadoop.apache.org
Hello All,
If the HDFS block size
Check this out:
http://ofirm.wordpress.com/2014/02/01/exploring-the-hdfs-default-value-behaviour/
"It seems that the value of *dfs.block.size* is dictated directly by the
client, regardless of the cluster setting. If a value is not specified, the
client just picks the default value. This finding is
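To see which value actually won, a sketch like this (the path is just an example) reads the block size back from the file status:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockSize {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus st = fs.getFileStatus(new Path("/tmp/demo.bin"));
    // prints the block size the file was actually written with
    System.out.println(st.getBlockSize() + " bytes");
  }
}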
Hello All,
If the HDFS block size is set to 128MB on the cluster and on the client it's set
to 64MB, what will be the size of the block when it's written to HDFS?
Can anyone please point me to a link where I can find more information.
Thanks
Sajeeth
> I am aware that one can add names to dfs.hosts and run dfsadmin
> -refreshNodes, but with Kerberos I have the additional problem that the new
> hosts' principals have to be added to hadoop.security.auth_to_local (I do not
> have the luxury of an easy albeit secure pattern for host names). Ala
Sorry, I should've mentioned this. I've installed the snappy lib using
apt-get; my hadoop had no snappy support built in.
Peter
On 10.12.2014 19:28, Ted Yu wrote:
See:
https://issues.apache.org/jira/browse/HADOOP-9911
Can you recompile snappy for a 64-bit system?
Cheers
On Wed, Dec 10, 2014 at
Hi everyone,
I'm trying to run C++ code in Hadoop, following the explanation posted in:
http://cs.smith.edu/dftwiki/index.php/Hadoop_Tutorial_2.2_--_Running_C%2B%2B_Programs_on_Hadoop
But I'm facing the following problem when running the compiled code:
java.lang.Exception: java.lang.NullPointerE
Is this a MapReduce application?
MR has a concept of blacklisting nodes where a lot of tasks fail. The configs
that control it are
- yarn.app.mapreduce.am.job.node-blacklisting.enable: True by default
- mapreduce.job.maxtaskfailures.per.tracker: Default is 3, meaning a node is
blacklisted if it
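If you need to change them, a minimal sketch on the job configuration (the values shown are just the defaults restated, not a recommendation):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class BlacklistTuning {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean("yarn.app.mapreduce.am.job.node-blacklisting.enable", true);
    // a node is blacklisted for the job after this many task failures on it
    conf.setInt("mapreduce.job.maxtaskfailures.per.tracker", 3);
    Job job = Job.getInstance(conf, "example");
    // ... set mapper, reducer, input/output as usual
  }
}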
Replies inline
> Here is my question: is there a mechanism such that when one container exits
> abnormally, yarn will prefer to dispatch the container on another NM?
Acting on container exit is a responsibility left to ApplicationMasters. For
example, the MapReduce ApplicationMaster explicitly tells YARN t
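As a rough sketch of that idea (not the actual MR AM code; the host name is made up, and this is only meaningful inside a running ApplicationMaster), an AM can steer containers away from a node via AMRMClient:

import java.util.Collections;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class BlacklistSketch {
  public static void main(String[] args) {
    AMRMClient<AMRMClient.ContainerRequest> amrm = AMRMClient.createAMRMClient();
    amrm.init(new YarnConfiguration());
    amrm.start();
    // ask the RM to avoid a node we saw repeated failures on
    amrm.updateBlacklist(Collections.singletonList("badnode.example.com"), null);
    amrm.stop();
  }
}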
See:
https://issues.apache.org/jira/browse/HADOOP-9911
Can you recompile snappy for a 64-bit system?
Cheers
On Wed, Dec 10, 2014 at 9:22 AM, peterm_second wrote:
> Hi Fabio,
> Thanks for the reply, but unfortunately it didn't work. I am using vanilla
> Hadoop 2.4 with vanilla Hive 0.14 and so
Hi Fabio,
Thanks for the reply, but unfortunately it didn't work. I am using
vanilla Hadoop 2.4 with vanilla Hive 0.14 and so on; I am using the
vanilla distros.
I did set the HADOOP_COMMON_LIB_NATIVE_DIR but that didn't make any
change. What version were you using?
Peter
On 10.12.2014 16
Hello everyone,
I'm trying to instantiate ZlibDecompressor using the following constructor:
ZlibDecompressor inflater = new
ZlibDecompressor(ZlibDecompressor.CompressionHeader.DEFAULT_HEADER, 1024);
This constructor call throws an exception
java.lang.UnsatisfiedLinkError:
org.apache.hadoop.io.compr
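A quick way to check whether the native code loaded at all (a sketch; false here would explain the UnsatisfiedLinkError):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.zlib.ZlibFactory;
import org.apache.hadoop.util.NativeCodeLoader;

public class ZlibCheck {
  public static void main(String[] args) {
    // false means libhadoop.so was not found on java.library.path
    System.out.println("native hadoop: " + NativeCodeLoader.isNativeCodeLoaded());
    // false means the native zlib bindings are unavailable
    System.out.println("native zlib: " + ZlibFactory.isNativeZlibLoaded(new Configuration()));
  }
}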
Hello,
how would you guys go about adding additional nodes to a Hadoop cluster running
with Kerberos, preferably without restarting the
namenode/resourcemanager/hbase-master etc?
I am aware that one can add names to dfs.hosts and run dfsadmin -refreshNodes,
but with Kerberos I have the additio
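For the auth_to_local part, one illustrative option (the realm and local user are made up) is a wildcard rule in hadoop.security.auth_to_local that maps every datanode principal to a single local user, so new hosts need no new entries:

RULE:[2:$1@$0](dn@EXAMPLE.COM)s/.*/hdfs/

A rule like this keys on the service component rather than the host name, so it can work even without a predictable host-name pattern.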
Not sure it will help, but if the problem is native library loading, I
spent a long time trying anything to make it work.
I'd suggest also trying:
export JAVA_LIBRARY_PATH=/opt/yarn/hadoop-2.5.0/lib/native
export HADOOP_COMMON_LIB_NATIVE_DIR=/opt/yarn/hadoop-2.5.0/lib
I have this both in the
Hi guys,
I have a hadoop + hbase + hive application.
For some reason my cluster is unable to find the snappy native library.
Here is the exception:
org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
at
org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
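As a quick sanity check (a sketch, not from the original thread), NativeCodeLoader can be asked directly whether the loaded libhadoop was built with snappy:

import org.apache.hadoop.util.NativeCodeLoader;

public class SnappyCheck {
  public static void main(String[] args) {
    // false: libhadoop.so itself was never loaded
    boolean nativeLoaded = NativeCodeLoader.isNativeCodeLoaded();
    System.out.println("native hadoop loaded: " + nativeLoaded);
    if (nativeLoaded) {
      // false: libhadoop was compiled without snappy support,
      // which matches the buildSupportsSnappy()Z error above
      System.out.println("snappy supported: " + NativeCodeLoader.buildSupportsSnappy());
    }
  }
}

Newer 2.x releases also ship a "hadoop checknative -a" command that prints the same information from the shell.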
It seems there is a blacklist in yarn: when all containers of one NM are lost,
will it add this NM to the blacklist? And when will the NM come off the blacklist?
On 2014/12/10 13:39, scwf wrote:
Hi, all
Here is my question: is there a mechanism such that when one container exits
abnormally, yarn will prefer to dispatch the container on another NM?