Decompression using LZO

2013-08-06 Thread Sandeep Nemuri
Hi all, can anyone help me out? How do I decompress data in Hadoop using LZO? -- Regards, Sandeep Nemuri
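One way to do this programmatically is through Hadoop's codec factory (a minimal sketch, assuming hadoop-lzo's com.hadoop.compression.lzo.LzopCodec is on the classpath and registered in io.compression.codecs; paths are hypothetical):

    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class LzoDecompress {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path in = new Path(args[0]);   // e.g. /data/file.lzo
        Path out = new Path(args[1]);  // decompressed output
        FileSystem fs = in.getFileSystem(conf);
        // Resolves the codec from the file extension; .lzo maps to LzopCodec
        // only when hadoop-lzo is installed and configured.
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(in);
        if (codec == null) {
          System.err.println("No codec found for " + in);
          return;
        }
        InputStream is = codec.createInputStream(fs.open(in));
        OutputStream os = fs.create(out);
        IOUtils.copyBytes(is, os, conf, true);  // copies and closes both streams
      }
    }

Depending on the version, hadoop fs -text on the .lzo file may also decompress it through the configured codecs.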

Re: about replication

2013-08-06 Thread manish dunani
*You are going wrong here:* Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin $ ./hadoop dfs -copyFromLocal /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar /wksp copyFromLocal: File /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar does not exist. Administrator@DFS-DC /c
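Since copyFromLocal reads its source from the local filesystem, that error means the local path is wrong; checking it first avoids the failure (a sketch using the paths from the thread):

    $ ls -l /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar
    $ ./hadoop dfs -copyFromLocal /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar /wksp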

Question about Hadoop

2013-08-06 Thread 間々田 剛史
Dear Sir, we are students at Hosei University. We are studying Hadoop for research. We use Hadoop 2.0.0-CDH4.2.1 MRv2, and its environment is CentOS 6.2. We can access HDFS from the master and slaves. We have some questions. Master: Hadoop04 Slaves: Hadoop01 Hadoop02 Hadoop03 We ran the "wordc

Re: issue about try lzo in hadoop

2013-08-06 Thread Sandeep Nemuri
Send me your conf file. On Tue, Aug 6, 2013 at 12:09 PM, ch huang wrote: > I use YARN, and the Hadoop version is CDH4.3. LZO is installed, but when I run > the test it fails. Why? > > # sudo -u hdfs hadoop jar > /usr/lib/hadoop/lib/hadoop-lzo-cdh4-0.4.15-gplextras.jar > com.hadoop.compression.lzo.LzoInde
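For reference, the indexer class in that jar is normally invoked with a file or directory argument (a sketch; the jar path is taken from the quoted command, the target path is hypothetical):

    $ sudo -u hdfs hadoop jar /usr/lib/hadoop/lib/hadoop-lzo-cdh4-0.4.15-gplextras.jar \
          com.hadoop.compression.lzo.LzoIndexer /data/file.lzo

If no path argument is given, or the path is unreadable, the run fails before any indexing happens.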

Re: Question about Hadoop

2013-08-06 Thread manish dunani
After checking your error code, I think you entered the wrong map and reduce classes. Can you please show me the code? Then I will tell you exactly where you made the mistake. On Tue, Aug 6, 2013 at 12:25 PM, 間々田 剛史 wrote: > Dear Sir > > we are students at Hosei University. > we study hadoop now for research. > >

Hadoop Source

2013-08-06 Thread Sathwik B P
Hi guys, I wanted to check out the Hadoop source. I see that there are 3 repos listed under http://git.apache.org/: hadoop-common.git, hadoop-hdfs.git, and hadoop-mapreduce.git. I checked out hadoop-common.git and I see that it has hadoop-common-project, hadoop-hdfs-project, hadoop-yarn-project, hadoo

Re: issue about try lzo in hadoop

2013-08-06 Thread ch huang
I installed hadoop-0.20-mapreduce-2.0.0+1357-1.cdh4.3.0.p0.21.el6.x86_64 and now it is OK. The question is: does the test code use MRv1, not YARN? On Tue, Aug 6, 2013 at 3:27 PM, Sandeep Nemuri wrote: > Send me your conf file. > > > On Tue, Aug 6, 2013 at 12:09 PM, ch huang wrote: > >> i use yarn ,and hadoop vers

Re: Hadoop Source

2013-08-06 Thread Harsh J
Hi Sathwik, You only need hadoop-common today. The other two are kept for historical reasons, from when the repositories were split across projects. We have some docs on this at http://wiki.apache.org/hadoop/HowToContribute and http://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk that sh
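In other words (a sketch; the mirror URL is the one listed on http://git.apache.org/):

    $ git clone git://git.apache.org/hadoop-common.git
    $ cd hadoop-common

The build steps for the checked-out tree are covered by the two wiki pages above.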

Oozie shell action error

2013-08-06 Thread Kasa V Varun Tej
I'm trying to implement a simple ls command through a shell action, but I'm facing an error. *Exact issue:* *script:* #!/bin/bash ls /home/my-directory *stdout logs:* >>> Invoking Shell command line now >> Exit code of the Shell command 2 <<< Invocation of Shell command completed <<< Invo
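A shell action runs on an arbitrary node of the cluster, so a path like /home/my-directory has to exist on whichever node the launcher lands on; exit code 2 is what GNU ls returns when it cannot access the path. For reference, a minimal sketch of a shell action definition (names, variables, and the schema version are assumptions, not from the thread):

    <action name="shell-node">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <exec>myscript.sh</exec>
            <!-- ship the script with the workflow so every node has it -->
            <file>${appPath}/myscript.sh#myscript.sh</file>
            <capture-output/>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>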

Re: Question about Hadoop

2013-08-06 Thread Tatsuo Kawasaki
Hi Tsuyoshi, Did you run the "wordcount" sample in hadoop-examples.jar? Can you share the command that you ran? Thanks, -- Tatsuo On Tue, Aug 6, 2013 at 3:55 PM, 間々田 剛史 wrote: > Dear Sir > > we are students at Hosei University. > we study hadoop now for research. > > we use Hadoop2.0.0-CDH4.2.1 MRv
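For reference, the stock example is usually run along these lines (a sketch; the jar location varies by install — this one is typical for a CDH4 MRv2 layout — and the HDFS paths are hypothetical):

    $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount \
          /user/hadoop/input /user/hadoop/output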

Hadoop network setup

2013-08-06 Thread Lanati, Matteo
Hi all, the question about setting up Hadoop with multiple network cards has been asked many times, but I couldn't find the info that I needed. Sorry if this is a duplicate; in that case just point me to the right documents. My nodes have two interfaces, eth0 with a public IP and eth1 with a p
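For reference, the knobs that usually matter in a two-NIC setup are the bind addresses and the DNS-interface settings (a sketch of hdfs-site.xml entries; whether these fit depends on the exact topology):

    <property>
      <name>dfs.datanode.dns.interface</name>
      <value>eth1</value>  <!-- resolve the DataNode's hostname via the private NIC -->
    </property>
    <property>
      <name>dfs.datanode.address</name>
      <value>0.0.0.0:50010</value>  <!-- listen on all interfaces -->
    </property>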

Hadoop datanode becomes down

2013-08-06 Thread Manickam P
Hi, I have n files, each around 25 GB. I have a cluster set up with 6 machines as data nodes and one master node. When I move these files from my local machine into HDFS, sometimes a data node goes down. Is there any specific reason for this behavior? Or do I need to follow some other way to move

Compilation problem of Hadoop Projects after Import into Eclipse

2013-08-06 Thread Sathwik B P
Hi guys, I see a couple of problems with the generation of Eclipse artifacts via mvn eclipse:eclipse. There are a couple of compilation issues after importing the Hadoop projects into Eclipse, though I am able to rectify them. 1) hadoop-common: TestAvroSerialization.java doesn't compile as it uses AvroR

RE: Hadoop datanode becomes down

2013-08-06 Thread Devaraj k
Can you find the reason for the data node going down in the Data Node log? Do you get any exception in the client when you try to put the file into HDFS? Thanks Devaraj k From: Manickam P [mailto:manicka...@outlook.com] Sent: 06 August 2013 15:07 To: user@hadoop.apache.org Subject: Hadoop datanode b

Re: setLocalResources() on ContainerLaunchContext

2013-08-06 Thread Harsh J
To be honest, I've never tried loading an HDFS file onto the LocalResource this way. I usually just pass a local file and that works just fine. There may be something in the URI transformation possibly breaking an HDFS source, but try passing a local file - does that fail too? The Shell example uses

Re: throughput metrics in hadoop-2.0.5

2013-08-06 Thread lei liu
There is "@Metric MutableCounterLong fsyncCount" metrics in DataNodeMetrics, the MutableCounterLong class continuously increase the value, so I think the value in ganglia should be "10, 20 ,30, 40" and so on. but the value the value is fsyncCount.value/10, that is in "1 ,1 , 1 , 1" in ganglia.

Re: Hadoop datanode becomes down

2013-08-06 Thread Manoj Sah
Hi, maybe the datanode is down because of: 1. Network issues 2. The size of the disk 3. Check the datanode log file; if you find an exception there, then try to put the file into HDFS again. Thanks Manoj On Tue, Aug 6, 2013 at 3:28 PM, Devaraj k wrote: > Can you find out the reason for going Data node down

Re: throughput metrics in hadoop-2.0.5

2013-08-06 Thread lei liu
Is the value of the MutableCounterLong class set to zero every 10 seconds? 2013/8/6 lei liu > There is "@Metric MutableCounterLong fsyncCount" metrics in > DataNodeMetrics, the MutableCounterLong class continuously increase the > value, so I think the value in ganglia should be "10, 20 ,30, 40"

Re: throughput metrics in hadoop-2.0.5

2013-08-06 Thread lei liu
Is the value of the MutableCounterLong class set to zero every 10 seconds? 2013/8/6 lei liu > Is the the value of MutableCounterLong class set to zreo per 10 seconds? > > > 2013/8/6 lei liu > >> There is "@Metric MutableCounterLong fsyncCount" metrics in >> DataNodeMetrics, the MutableCounte

Re: setLocalResources() on ContainerLaunchContext

2013-08-06 Thread Krishna Kishore Bonagiri
Hi Harsh, The setResource() call on LocalResource expects an argument of type org.apache.hadoop.yarn.api.records.URL, which is converted from a string in the form of a URI. This happens in the following call in the Distributed Shell example: shellRsrc.setResource(ConverterUtils.getYarnUrlFromUR

Re: Question about Hadoop

2013-08-06 Thread yypvsxf19870706
Hi, you need to check your ResourceManager log and the log of the container allocated by your RM. Sent from my iPhone On 2013-8-6, at 15:30, manish dunani wrote: > After checking ur error code. > I think u entered wrong map and reduce class. > > can u pls show me code?? > Then i will tell u co

Re: setLocalResources() on ContainerLaunchContext

2013-08-06 Thread Harsh J
Can you try passing a fully qualified local path? That is, including the file:/ scheme. On Aug 6, 2013 4:05 PM, "Krishna Kishore Bonagiri" wrote: > Hi Harsh, >The setResource() call on LocalResource() is expecting an argument of > type org.apache.hadoop.yarn.api.records.URL which is converted

Re: setLocalResources() on ContainerLaunchContext

2013-08-06 Thread Krishna Kishore Bonagiri
I tried the following and it works! String shellScriptPath = "file:///home_/dsadm/kishore/kk.ksh"; But now I am getting a timestamp error like the one below, when I pass 0 to setTimestamp(): 13/08/06 08:23:48 INFO ApplicationMaster: Got container status for containerID= container_1375784329048_0017_01_02

HDFS 1.1.0 - File Append

2013-08-06 Thread Thomas Stidsborg Sylvest
Hi, I am in the process of developing a framework around Hadoop that enables RabbitMQ messages to be persisted in HDFS. The messages will continuously stream into the system, as they are stock prices, weather data, etc. Unfortunately it looks like I will not be able to append to a file in HDFS

Re: HDFS 1.1.0 - File Append

2013-08-06 Thread Adam Muise
Thomas, Try using Flume to ingest the real-time messages from RabbitMQ. Flume ingests event data and has pluggable components: source -> channel -> sink. http://flume.apache.org There is an HDFS sink already that allows you to land and bunch data as you like it. It will handle all of the landing l
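A minimal sketch of the sink side of such an agent (the hdfs sink properties are standard Flume; the RabbitMQ source would be a custom or third-party plugin, so its type below is only a hypothetical placeholder):

    agent.sources = rabbit
    agent.channels = mem
    agent.sinks = hdfs-out

    # RabbitMQ ingestion needs a custom/third-party source; placeholder only
    agent.sources.rabbit.type = org.example.RabbitMQSource
    agent.sources.rabbit.channels = mem

    agent.channels.mem.type = memory

    agent.sinks.hdfs-out.type = hdfs
    agent.sinks.hdfs-out.channel = mem
    agent.sinks.hdfs-out.hdfs.path = hdfs://namenode/flume/events/%Y-%m-%d
    agent.sinks.hdfs-out.hdfs.rollInterval = 300
    agent.sinks.hdfs-out.hdfs.fileType = DataStream

Rolling by interval (or size/count) sidesteps the append problem: Flume closes one HDFS file and opens the next instead of appending indefinitely.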

Re: HDFS 1.1.0 - File Append

2013-08-06 Thread Harsh J
HBase does not use append(…) anywhere; it only syncs over its WAL stream, for which sync(…) was the API. Append on 1.x is pretty much broken for various edge cases and is therefore also marked unsupported. If you need Flume's append usage to work reliably, you will need to use a 2.x-based HDF

Re: setLocalResources() on ContainerLaunchContext

2013-08-06 Thread Omkar Joshi
Hi, You need to match the timestamp. Probably get the timestamp locally before adding it. This is done explicitly to ensure that the file is not updated after the user makes the call, to avoid possible errors. Thanks, Omkar Joshi *Hortonworks Inc.* On Tue, Aug 6, 2013 at 5:

Re: setLocalResources() on ContainerLaunchContext

2013-08-06 Thread Hitesh Shah
Hi Krishna, YARN downloads a specified local resource on the container's node from the URL specified. In all situations, the remote URL needs to be a fully qualified path. To verify that the file at the remote URL is still valid, YARN expects you to provide the length and last modified timest

UnSubscribe

2013-08-06 Thread SK
Unsubscribe. Thanks.

Passing arguments in hadoop

2013-08-06 Thread jamal sasha
Hi, I am trying to pass a parameter to multiple mappers. So, I do this in my driver: conf.set("delimiter", args[3]); In mapper1, I am retrieving this as: Configuration conf = context.getConfiguration(); String[] values = value.toString().split(conf.get("delimiter")); and the same in my mapper2. B
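For reference, a minimal self-contained sketch of this pattern (class and job names are hypothetical); the usual pitfall is setting the parameter after the Job has already copied the Configuration:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class DelimiterJob {

      public static class DelimMapper
          extends Mapper<LongWritable, Text, Text, Text> {
        private String delimiter;

        @Override
        protected void setup(Context context) {
          // Read the parameter once per task rather than once per record.
          delimiter = context.getConfiguration().get("delimiter", ",");
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          String[] fields = value.toString().split(delimiter);
          if (fields.length > 1) {
            context.write(new Text(fields[0]), new Text(fields[1]));
          }
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Must happen BEFORE the Job is created: Job copies the Configuration,
        // so later conf.set(...) calls are not seen by the tasks.
        conf.set("delimiter", args[3]);

        Job job = new Job(conf, "delimiter-demo");
        job.setJarByClass(DelimiterJob.class);
        job.setMapperClass(DelimMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }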

Re: Passing arguments in hadoop

2013-08-06 Thread jamal sasha
Never mind guys. I had a typo when I was trying to set the configuration param. Sorry. On Tue, Aug 6, 2013 at 4:46 PM, jamal sasha wrote: > Hi, > I am trying to pass a parameter to multiple mappers > > So, I do this in my driver > > conf.set("delimiter", args[3]); > > In mapper1, I am retreiving

Re: setLocalResources() on ContainerLaunchContext

2013-08-06 Thread Harsh J
It is kinda unnecessary to be asking developers to load in timestamps and lengths themselves. Why not provide a java.io.File, or perhaps a Path-accepting API, that gets them automatically on their behalf using the FileSystem API internally? P.s. An HDFS file gave him an FNF, while a local file gave him

Re: setLocalResources() on ContainerLaunchContext

2013-08-06 Thread Hitesh Shah
@Krishna, your logs showed the file error for "hdfs://isredeng/kishore/kk.ksh". I am assuming you have tried dfs -ls /kishore/kk.ksh and confirmed that the file exists? Also, the qualified path seems to be missing the namenode port. I need to go back and check whether a path without the port works by

Re: setLocalResources() on ContainerLaunchContext

2013-08-06 Thread Harsh J
Thanks Hitesh! P.s. Port isn't a requirement (and with HA URIs, you shouldn't add a port), but "isredeng" has to be the authority component. On Wed, Aug 7, 2013 at 7:37 AM, Hitesh Shah wrote: > @Krishna, your logs showed the file error for "hdfs://isredeng/kishore/kk.ksh" > > I am assuming you h

Re: setLocalResources() on ContainerLaunchContext

2013-08-06 Thread Krishna Kishore Bonagiri
Hi Harsh, Hitesh & Omkar, Thanks for the replies. I tried getting the last modified timestamp like this and it works. Is this the right thing to do? File file = new File("/home_/dsadm/kishore/kk.ksh"); shellRsrc.setTimestamp(file.lastModified()); And, when I tried using an HDFS file
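For an HDFS file, the timestamp and length have to come from the NameNode rather than from java.io.File; a sketch of doing that via FileStatus (the path is from the thread, the class and method names are assumptions):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.yarn.api.records.LocalResource;
    import org.apache.hadoop.yarn.api.records.LocalResourceType;
    import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
    import org.apache.hadoop.yarn.util.ConverterUtils;
    import org.apache.hadoop.yarn.util.Records;

    public class LocalResourceSetup {
      public static LocalResource hdfsResource(Configuration conf, String uri)
          throws IOException {
        Path path = new Path(uri);  // e.g. "hdfs://isredeng/kishore/kk.ksh"
        FileStatus status = path.getFileSystem(conf).getFileStatus(path);
        LocalResource rsrc = Records.newRecord(LocalResource.class);
        rsrc.setResource(ConverterUtils.getYarnUrlFromPath(status.getPath()));
        // Size and modification time exactly as the NameNode reports them,
        // so the NodeManager's validity check on download matches.
        rsrc.setSize(status.getLen());
        rsrc.setTimestamp(status.getModificationTime());
        rsrc.setType(LocalResourceType.FILE);
        rsrc.setVisibility(LocalResourceVisibility.APPLICATION);
        return rsrc;
      }
    }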

Too Many CLOSE_WAIT make performance down

2013-08-06 Thread Bing Jiang
Version: HBase: 0.94.3 HDFS: 0.20.* There are too many CLOSE_WAIT connections from the RegionServer to the DataNode, and I find the number is over 3. I changed the log level of 'org.apache.hadoop.ipc.HBaseServer.trace' to DEBUG and checked the performance: > Call #2649932; Served: HRegionInterface#get queueTime=

Re: whitelist feature of YARN

2013-08-06 Thread Sandy Ryza
YARN-521, which brings whitelisting to the AMRMClient APIs, is now included in 2.1.0-beta. Check out the doc for the relaxLocality parameter in ContainerRequest in AMRMClient: https://github.com/apache/hadoop-common/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/ap
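A sketch of what that looks like against the 2.1.0-beta client API, going by YARN-521 (the host name is hypothetical; relaxLocality=false is the whitelisting part):

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.util.Records;

    public class WhitelistRequest {
      public static void request(AMRMClient<ContainerRequest> amrmClient) {
        Resource capability = Records.newRecord(Resource.class);
        capability.setMemory(1024);
        capability.setVirtualCores(1);
        Priority priority = Records.newRecord(Priority.class);
        priority.setPriority(0);
        // relaxLocality=false: the request may only be satisfied on the
        // listed node(s); it will not fall back to rack or any-host.
        ContainerRequest req = new ContainerRequest(
            capability,
            new String[] { "node-17.example.com" },  // hypothetical host
            null,                                    // no rack constraint
            priority,
            false);                                  // do not relax locality
        amrmClient.addContainerRequest(req);
      }
    }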

Re: whitelist feature of YARN

2013-08-06 Thread Krishna Kishore Bonagiri
Hi Sandy, Thanks for the reply, and it is good to know YARN-521 is done! Please answer my following questions: 1) When is 2.1.0-beta going to be released? Is it soon, or do you suggest I take it from trunk, or is there a recent release candidate available? 2) I have recently changed my applic