Re: block replication

2014-01-01 Thread Vishnu Viswanath
Thanks Hardik. I did a bit of reading on the 'stale' state; HDFS-3703 describes it as a state between dead and alive, and says the timeout for marking a node as dead is 10.30 minutes. But can this be configured? Please help. On Wed, Jan 1, 2014 at 2:46 AM, Hardik Pandya wrote: > >

[no subject]

2014-01-01 Thread Saeed Adel Mehraban
I have a Hadoop 2.2.0 installation on 3 VMs, one as master and 2 as slaves. When I try to run simple jobs like the provided wordcount sample on 1 or a few files, it succeeds about half the time, but with more files I get a failure most of the time

Setting up Snappy compression in Hadoop

2014-01-01 Thread Amit Sela
Hi all, I'm running on Hadoop 1.0.4 and I'd like to use Snappy for map output compression. I'm adding the configurations: configuration.setBoolean("mapred.compress.map.output", true); configuration.set("mapred.map.output.compression.codec", "org.apache.hadoop.io.compress.SnappyCodec"); And I've
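For context, a minimal sketch of how those two properties fit into a full Hadoop 1.x job setup. The JobConf helper methods below are typed equivalents of the string keys quoted above; the class name and the identity map/reduce job are hypothetical, not the poster's actual job:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.SnappyCodec;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class SnappyJob {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SnappyJob.class);
        // Equivalent to mapred.compress.map.output = true
        conf.setCompressMapOutput(true);
        // Equivalent to mapred.map.output.compression.codec = ...SnappyCodec
        conf.setMapOutputCompressorClass(SnappyCodec.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        // No mapper/reducer set: the identity defaults run, which is
        // enough to exercise map-output compression.
        JobClient.runJob(conf);
      }
    }

Note that map-output compression only takes effect if the native Snappy library actually loads on the task nodes, which is where the ldd check further down this thread comes in.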

Re: Setting up Snappy compression in Hadoop

2014-01-01 Thread Ted Yu
Please take a look at http://hbase.apache.org/book.html#snappy.compression Cheers On Wed, Jan 1, 2014 at 8:05 AM, Amit Sela wrote: > Hi all, > > I'm running on Hadoop 1.0.4 and I'd like to use Snappy for map output > compression. > I'm adding the configurations: > > configuration.setBoolean("m

Re: Map succeeds but reduce hangs

2014-01-01 Thread navaz
Thanks. But I wonder why map succeeds 100%. How does it resolve the hostname? Now reduce reaches 100% but bails out slave2 and slave3 (though the map succeeded on those nodes). Does it look up the hostname only for reduce? 14/01/01 09:09:38 INFO mapred.JobClient: Running job: job_201401010908_0001

Re: Setting up Snappy compression in Hadoop

2014-01-01 Thread bharath vissapragada
Did you build it for your platform? You can run "ldd" on the .so file to check whether the dependent libs are present. Also make sure you placed it in the right directory for your platform (Linux-amd64-64 or Linux-i386-32). On Wed, Jan 1, 2014 at 10:02 PM, Ted Yu wrote: > Please take a look at http
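Along the same lines, a small check from the Java side (class name hypothetical; this only verifies that the native hadoop library and a Snappy compressor can be loaded in the current JVM, it does not replace the ldd check on the task nodes):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.util.NativeCodeLoader;
    import org.apache.hadoop.util.ReflectionUtils;

    public class NativeSnappyCheck {
      public static void main(String[] args) throws Exception {
        // True only if libhadoop was found on java.library.path and loaded.
        System.out.println("native hadoop: " + NativeCodeLoader.isNativeCodeLoaded());
        Configuration conf = new Configuration();
        CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(
            conf.getClassByName("org.apache.hadoop.io.compress.SnappyCodec"), conf);
        // Throws at runtime if libsnappy is missing or built for the wrong platform.
        System.out.println("snappy compressor: " + codec.createCompressor());
      }
    }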

Re: Re: block replication

2014-01-01 Thread chenfolin
Hi, Vishnu Viswanath: 10.30 minutes = 2 * conf.getInt("dfs.namenode.heartbeat.recheck-interval", 5*60*1000) + 10 * 1000 * conf.getInt("dfs.heartbeat.interval", 3). To change it, configure "dfs.heartbeat.interval" and "dfs.namenode.heartbeat.recheck-interval". 2014-01-01 chenfolin From:
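Spelled out as runnable code, the arithmetic above looks like this (a sketch using only the defaults quoted in the message; note dfs.heartbeat.interval is in seconds while the recheck interval is in milliseconds, and the class name is hypothetical):

    import org.apache.hadoop.conf.Configuration;

    public class DeadNodeInterval {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        long recheckMs = conf.getInt("dfs.namenode.heartbeat.recheck-interval", 5 * 60 * 1000);
        long heartbeatSec = conf.getInt("dfs.heartbeat.interval", 3);
        // 2 * 300000 ms + 10 * 1000 * 3 = 630000 ms = 10 minutes 30 seconds
        long deadIntervalMs = 2 * recheckMs + 10 * 1000L * heartbeatSec;
        System.out.println("dead node interval: " + deadIntervalMs + " ms");
      }
    }

Lowering either property in hdfs-site.xml shrinks the dead-node window accordingly.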

Re: Map succeeds but reduce hangs

2014-01-01 Thread Hardik Pandya
Do you have your hostnames properly configured in /etc/hosts? Have you tried 192.168.?.? instead of localhost 127.0.0.1? On Wed, Jan 1, 2014 at 11:33 AM, navaz wrote: > Thanks. But I wonder why map succeeds 100%. How does it resolve the hostname? > > Now reduce becomes 100% but bailing out slave2 and sl

Re: Map succeeds but reduce hangs

2014-01-01 Thread navaz
I don't know why it is running on localhost; I have commented that entry out. == *slave1:* Hostname: pc321 hduser@pc321:/etc$ vi hosts #127.0.0.1 localhost loghost localhost.myslice.ch-geni-net.emulab.net 155.98.39.28 pc228 155.98.39.121 p
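For reference, the shape this thread is driving at: every node's /etc/hosts maps each cluster hostname to its routable address, with no line tying a cluster hostname to 127.0.0.1. A sketch only, reusing the addresses visible in the message; the second hostname is truncated above, so a placeholder stands in for it:

    # /etc/hosts on every node -- illustrative sketch, not the poster's file
    #127.0.0.1    localhost        # cluster hostnames must not resolve here
    155.98.39.28  pc228            # e.g. the master
    155.98.39.121 <slave-hostname> # placeholder: real name truncated above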

Re: failed MR

2014-01-01 Thread Sanjay Upadhyay
On Wednesday 01 January 2014 03:46 PM, Saeed Adel Mehraban wrote: > I have a Hadoop 2.2.0 installation on 3 VMs, one as master and 2 > as slaves. When I try to run simple jobs like provided wordcount > sample, if I try to run the job on 1 or a few

Unable to change the virtual memory to be more than the default 2.1 GB

2014-01-01 Thread S.L
Hello Folks, I am running Hadoop 2.2 in pseudo-distributed mode on a laptop with 8GB RAM. Whenever I submit a job, I get an error saying that the virtual memory usage exceeded the limit, like below. I have changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml to 10, however th
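For what it's worth, the usual knobs for this live in yarn-site.xml. A sketch below: the value 10 comes from the message itself, and the second property is an alternative some people use to disable the virtual-memory check outright; verify both names against your 2.2 documentation:

    <!-- yarn-site.xml: raise the virtual/physical memory ratio -->
    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>10</value>
    </property>
    <!-- or turn the virtual-memory check off entirely -->
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>

Either way, the NodeManager has to be restarted before a yarn-site.xml change takes effect.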

Re: Split using mapreduce in hbase

2014-01-01 Thread Ranjini Rathinam
> Hi, I am using version 0.92 of HBase. Ranjini. On Thu, Jan 2, 2014 at 10:22 AM, Ranjini Rathinam wrote: >> -- Forwarded message -- >> From: Ted Yu >> Date: Tue, Dec 31, 2013 at 8:06 PM >> Subject: Re: Split using mapreduce in hbase >> To: "u...@hbase.apach