Failed to run wordcount on YARN

2013-07-12 Thread Liu, Raymond
Hi, I have just started trying out Hadoop 2.0. I use the 2.0.5-alpha package and followed http://hadoop.apache.org/docs/r2.0.5-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html to set up a cluster in non-secure mode. HDFS works fine with the client tools, but when I run the wordcount example, there
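
For context, a typical submission of the bundled wordcount example on a 2.0.5-alpha cluster looks like this (the jar location and HDFS paths are illustrative, not taken from the original message):

    # Submit the bundled wordcount example to YARN
    hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.5-alpha.jar \
      wordcount /user/raymond/input /user/raymond/output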

Re: copy files from ftp to hdfs in parallel, distcp failed

2013-07-12 Thread Hao Ren
On 11/07/2013 20:47, Balaji Narayanan (பாலாஜி நாராயணன்) wrote: multiple copy jobs to hdfs Thank you for your reply and the link. I had read the link before, but I didn't find any examples of copying files from ftp to hdfs. There are about 20-40 files in my directory. I just want to move or

Re: how to add JournalNodes

2013-07-12 Thread Harsh J
You need to restart your NameNodes to get them to use the new QJM 5-host-set configs, and I think you can do that without downtime if you're already in HA mode by restarting one NN at a time. To add new JNs first though, you will currently have to rsync their directory from a good JN to get them
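
A rough sketch of the sequence Harsh describes (hostnames and the edits directory are assumptions; the real path comes from dfs.journalnode.edits.dir in your hdfs-site.xml):

    # On the new JournalNode host, copy the edits directory from a healthy JN
    rsync -a goodjn:/data/hadoop/journal/ /data/hadoop/journal/
    # Start the new JournalNode
    hadoop-daemon.sh start journalnode
    # Then restart each NameNode one at a time with the updated qjournal URI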

unsubscribe

2013-07-12 Thread Margusja

RE: Tasktracker in namenode failure

2013-07-12 Thread Ramya S
Both the configured map output value class and the output value written from the mapper are Text, so there is no mismatch in the value class. But when the same MR program is run with 2 tasktrackers (without a tasktracker on the namenode), the exception does not occur. The problem is only with

RE: Tasktracker in namenode failure

2013-07-12 Thread Devaraj k
I think there is a mismatch of jars coming into the classpath for the map tasks when they run on different machines. You can find this out by giving some unique names to your Mapper class and job-submit class and then submitting the job. Thanks Devaraj k From: Ramya S [mailto:ram...@suntecgroup.com]

Re: Staging directory ENOTDIR error.

2013-07-12 Thread Ram
Hi Jay, which hadoop command did you run? From, Ramesh. On Fri, Jul 12, 2013 at 7:54 AM, Devaraj k devara...@huawei.com wrote: Hi Jay, here the client is trying to create a staging directory in the local file system, when it actually should be created in HDFS.

Re: copy files from ftp to hdfs in parallel, distcp failed

2013-07-12 Thread Ram
Hi, please configure the following in core-site.xml and try. Use hadoop fs -ls file:/// to list local file system files, and hadoop fs -ls ftp://<your ftp location> to list ftp files. If it lists the files, go for distcp. reference from
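
The properties themselves were cut off above; a plausible core-site.xml fragment for Hadoop's built-in FTPFileSystem looks like this (host, port, and credentials are placeholders):

    <property>
      <name>fs.ftp.host</name>
      <value>ftp.example.com</value>
    </property>
    <property>
      <name>fs.ftp.host.port</name>
      <value>21</value>
    </property>
    <property>
      <name>fs.ftp.user.ftp.example.com</name>
      <value>ftpuser</value>
    </property>
    <property>
      <name>fs.ftp.password.ftp.example.com</name>
      <value>secret</value>
    </property>

With that in place, the listing test and the copy would be:

    hadoop fs -ls ftp://ftp.example.com/incoming/
    hadoop distcp ftp://ftp.example.com/incoming/ hdfs://namenode:8020/data/incoming/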

Re: Tasktracker in namenode failure

2013-07-12 Thread Ram
Hi, the problem is with the jar file only. To check, run any other MR job or the sample wordcount job on the namenode's tasktracker. If it runs, there is no problem with the namenode tasktracker; if it does not, there may be a problem with the tasktracker configuration, so compare it with another node's tasktracker

Hadoop property precedence

2013-07-12 Thread Shalish VJ
Hi, suppose the block size set in the configuration file on the client side is 64MB, the block size set in the configuration file on the namenode side is 128MB, and the block size set in the configuration file on the datanode side is something else. Please advise: if the client is writing a file to hdfs, which property would
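
For reference, HDFS block size is a client-side, per-file setting: the value in effect on the writing client (its own config, or an explicit override) is what gets applied, regardless of what the namenode or datanode configs say. A quick illustration (paths are placeholders; Hadoop 1.x called the property dfs.block.size, 2.x uses dfs.blocksize):

    # Write a file with an explicit 128 MB block size, overriding the client config
    hadoop fs -D dfs.blocksize=134217728 -put localfile /user/shalish/localfile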

UNSUBSCRIBE

2013-07-12 Thread Brent Nikolaus

Re: How are 'PHYSICAL_MEMORY_BYTES' and 'VIRTUAL_MEMORY_BYTES' calculated?

2013-07-12 Thread Vinod Kumar Vavilapalli
They are running metrics. While the task is running, they tell you how much pmem/vmem it is using at that point in time; at the end of the job, it will be the last snapshot. Thanks, +Vinod On Jul 12, 2013, at 6:47 AM, Shahab Yunus wrote: I think they are cumulative but per

Re: UNSUBSCRIBE

2013-07-12 Thread sure bhands
Please send an email to user-unsubscr...@hadoop.apache.org to unsubscribe. Thanks, Surendra On Fri, Jul 12, 2013 at 10:24 AM, Brent Nikolaus bnikol...@gmail.com wrote:

Re: Staging directory ENOTDIR error.

2013-07-12 Thread Jay Vyas
This was a very odd error - it turns out that I had created a file called tmp in my fs root directory, which meant that when the jobs tried to write to the tmp directory, they ran into the not-a-dir exception. In any case, I think the error reporting in the NativeIO class should be revised.
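
A minimal sketch of the check and fix (assuming the stray tmp file ended up on HDFS; with a misconfigured fs.default.name it could equally be on the local filesystem):

    # If /tmp exists as a file rather than a directory, job staging fails with ENOTDIR
    hadoop fs -ls /
    hadoop fs -rm /tmp        # remove the stray file
    hadoop fs -mkdir /tmp     # recreate it as a directory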

Re: how to get hadoop HDFS path?

2013-07-12 Thread deepak rosario tharigopla
You can get the hdfs file system as follows: Configuration conf = new Configuration(); conf.addResource(new Path("/home/dpancras/TradeStation/CassandraPigHadoop/WebContent/WEB-INF/core-site.xml")); conf.addResource(new

Re: how to get hadoop HDFS path?

2013-07-12 Thread deepak rosario tharigopla
Configuration conf = new Configuration(); conf.addResource(new Path("/home/dpancras/TradeStation/CassandraPigHadoop/WebContent/WEB-INF/core-site.xml")); conf.addResource(new Path("/home/dpancras/TradeStation/CassandraPigHadoop/WebContent/WEB-INF/hdfs-site.xml")); FileSystem fs
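
The snippet above arrives truncated; a self-contained reconstruction (the WEB-INF paths are from the original message, the enclosing class and printed URI are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsPathExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Load the cluster's site files so the default filesystem points at HDFS
            conf.addResource(new Path("/home/dpancras/TradeStation/CassandraPigHadoop/WebContent/WEB-INF/core-site.xml"));
            conf.addResource(new Path("/home/dpancras/TradeStation/CassandraPigHadoop/WebContent/WEB-INF/hdfs-site.xml"));
            // FileSystem.get returns the configured filesystem; its URI is the HDFS root
            FileSystem fs = FileSystem.get(conf);
            System.out.println(fs.getUri());
        }
    }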

Re: How are 'PHYSICAL_MEMORY_BYTES' and 'VIRTUAL_MEMORY_BYTES' calculated?

2013-07-12 Thread hadoop qi
Thanks for the response. So do they represent the total physical memory (virtual memory) that has been allocated to the job (e.g., from heap and stack) during its entire lifetime? I am still confused about how to get the cumulative number from /proc/meminfo. I think from /proc/meminfo we can only get the

Re: How are 'PHYSICAL_MEMORY_BYTES' and 'VIRTUAL_MEMORY_BYTES' calculated?

2013-07-12 Thread Shahab Yunus
As Vinod Kumar Vavilapalli said, they are indeed snapshots at a point in time. So they are neither the peak usage over the whole duration of the job nor a cumulative aggregate that increases over time. Regards, Shahab On Fri, Jul 12, 2013 at 4:47 PM, hadoop qi hadoop@gmail.com wrote: Thanks for

Running hadoop for processing sources in full sky maps

2013-07-12 Thread andrea zonca
Hi, I have a few tens of full sky maps in binary format (FITS), about 600MB each. For each sky map I already have a catalog of the positions of a few thousand sources, i.e. stars, galaxies, radio sources. For each source I would like to: open the full sky map, extract the relevant section,

How to control of the output of /stacks

2013-07-12 Thread Shinichi Yamashita
Hi, I can see the node's stack trace when I access /stacks in the Web UI. The stack trace is also output to the node's log file. Because this bloats the log file and makes it hard to read, I don't want the stack trace written to the log. Is there a way to solve this problem? Regards, Shinichi

Re: How are 'PHYSICAL_MEMORY_BYTES' and 'VIRTUAL_MEMORY_BYTES' calculated?

2013-07-12 Thread Vinod Kumar Vavilapalli
No. Every so often (3 seconds IIRC) it captures pmem and vmem, which correspond to the usage of the process and its children at *that* specific point in time. Cumulative = cumulative across the process and its children. Thanks, +Vinod On Jul 12, 2013, at 1:47 PM, hadoop qi wrote: Thanks for
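
For reference, a small sketch of reading the final snapshot from a completed job's counters, using the Hadoop 2.x mapreduce API (the enclosing class and method are illustrative):

    import org.apache.hadoop.mapreduce.Counters;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.TaskCounter;

    public class MemoryCounters {
        // Print the last recorded pmem/vmem snapshot for a finished job
        static void printMemoryCounters(Job job) throws Exception {
            Counters counters = job.getCounters();
            long pmem = counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).getValue();
            long vmem = counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).getValue();
            System.out.println("pmem=" + pmem + " bytes, vmem=" + vmem + " bytes");
        }
    }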

Maven artifacts for 0.23.9

2013-07-12 Thread Eugene Dzhurinsky
Hello! Where is it possible to get Maven artifacts for a recent Hadoop release? Thanks! -- Eugene N Dzhurinsky
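
Release artifacts are published to Maven Central under the org.apache.hadoop group. A typical dependency for 0.23.9 (hadoop-client is the usual umbrella artifact; pick submodules as needed):

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>0.23.9</version>
    </dependency>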

Re: How to control of the output of /stacks

2013-07-12 Thread Harsh J
The logging has sometimes been useful in debugging (i.e. if the stack on the UI went uncaptured, the log helps). It is currently not specifically toggleable. I suppose it is OK to set it as DEBUG though. Can you file a JIRA for that please? The only way you can disable it right now is by
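
If you do go the log-level route in the meantime, something along these lines in log4j.properties should suppress the dump lines (the logger name is an assumption, based on the /stacks servlet logging through Hadoop's embedded HTTP server class):

    # Silence the "Process Thread Dump" lines emitted when /stacks is hit
    log4j.logger.org.apache.hadoop.http.HttpServer=ERROR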

Re:

2013-07-12 Thread Suresh Srinivas
Please use the CDH mailing list; this is the Apache Hadoop mailing list. Sent from phone On Jul 12, 2013, at 7:51 PM, Anit Alexander anitama...@gmail.com wrote: Hello, I am encountering a problem in a cdh4 environment. I can successfully run the map reduce job in the hadoop cluster. But when I