It appears to me that whatever chunk of the input CSV files your map
task 000149 receives, the program is unable to process it, so it throws an
error and exits.
Look into the task log of attempt attempt_1395628276810_0062_m_000149_0
to see if there is any stdout/stderr output that may help. The syslog
in
Hi,
I am getting the following exception while running the word count example:
14/04/10 15:17:09 INFO mapreduce.Job: Task Id :
attempt_1397123038665_0001_m_00_2, Status : FAILED
Container launch failed for container_1397123038665_0001_01_04 :
java.lang.IllegalArgumentException: Does not contain
Rahul,
Please check the host and port given in mapred-site.xml.
Thanks
Kiran
On Thu, Apr 10, 2014 at 3:23 PM, Rahul Singh smart.rahul.i...@gmail.com wrote:
Hi,
I am getting the following exception while running the word count example:
14/04/10 15:17:09 INFO mapreduce.Job: Task Id :
Thanks !!!
Diwakar
Sent from my iPhone
On Apr 9, 2014, at 9:22 PM, Harsh J ha...@cloudera.com wrote:
You could look at metrics the NN publishes, or look at/process the
HDFS audit log.
On Wed, Apr 9, 2014 at 6:36 PM, Diwakar Sharma diwakar.had...@gmail.com
wrote:
How and where to
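For the audit-log route mentioned above, here is a minimal sketch of summarising
such a log offline; the key=value line format (ugi=, cmd=, src=, ...) and the idea of
working on a local copy of hdfs-audit.log are assumptions based on the default
Hadoop audit logging setup, so adjust for your own configuration:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Counts HDFS operations per command (open, create, delete, ...) in an audit log.
// Assumes the default key=value audit line format (ugi=..., cmd=..., src=...) and
// a locally copied log file, e.g. hdfs-audit.log from the NameNode's log directory.
public class AuditLogSummary {
    private static final Pattern CMD = Pattern.compile("cmd=(\\S+)");

    public static void main(String[] args) throws Exception {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = CMD.matcher(line);
                if (m.find()) {
                    Integer n = counts.get(m.group(1));
                    counts.put(m.group(1), n == null ? 1 : n + 1);
                }
            }
        } finally {
            in.close();
        }
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            System.out.println(e.getKey() + "\t" + e.getValue());
        }
    }
}

Run it as: java AuditLogSummary /path/to/hdfs-audit.log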
Here is my mapred-site.xml config:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
    at. If local, then jobs are run in-process as a single map
    and reduce task.
  </description>
</property>
Also, the job runs
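For what it is worth, Hadoop parses configured addresses such as mapred.job.tracker
with NetUtils.createSocketAddr(), and a value that cannot be read as a usable
host:port pair typically surfaces as the kind of IllegalArgumentException shown in
the job output above. A minimal sketch, with purely hypothetical values:

import java.net.InetSocketAddress;
import org.apache.hadoop.net.NetUtils;

// Shows how Hadoop turns a configured "host:port" string into a socket address.
// The values below are hypothetical and only illustrate a good vs. a bad value;
// the exact exception message may differ between Hadoop versions.
public class AddressParseCheck {
    public static void main(String[] args) {
        InetSocketAddress ok = NetUtils.createSocketAddr("localhost:54311");
        System.out.println("parsed: " + ok);

        // A value without a port fails to parse:
        try {
            NetUtils.createSocketAddr("localhost");
        } catch (IllegalArgumentException e) {
            System.out.println("bad address: " + e.getMessage());
        }
    }
}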
Hi,
I wrote a custom InputFormat and InputSplit to handle NetCDF files, which I use
with a custom Pig load function. When I submitted a job by running a Pig
script, I got the error below. From the error log, the network location
name is
Hi All,
I currently have a Hadoop 2.0 cluster in production and I want to upgrade to
the latest release.
current version:
[root@doop1 ~]# hadoop version
Hadoop 2.0.0-cdh4.6.0
Cluster has the following services:
hbase
hive
hue
impala
mapreduce
oozie
sqoop
zookeeper
can someone point me to a howto
Motty,
https://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Installation-Guide/CDH5-Installation-Guide.html
provides instructions to upgrade from CDH4 to CDH5 (which bundles Hadoop
2.3.0).
If your intention is to use CDH5, that should help you. If you have further
I can use fsck to get over-replicated blocks, but how can I track pending
deletes?
On Thu, Apr 10, 2014 at 10:50 AM, Harsh J ha...@cloudera.com wrote:
The replica deletion is asynchronous. You can track its deletions via
the NameNode's over-replicated blocks and the pending delete metrics.
On
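A minimal sketch of watching those two numbers from outside the NameNode, assuming
the default Hadoop 2.x NN web UI port (50070) and the FSNamesystem JMX bean; both
are assumptions, so adjust them for your cluster:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Reads the NameNode's /jmx servlet and prints the over-replicated (excess) and
// pending-deletion block counts. Assumes the NN web UI is at localhost:50070
// and that the metrics live in the Hadoop:service=NameNode,name=FSNamesystem bean.
public class PendingDeleteMetrics {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:50070/jmx"
                + "?qry=Hadoop:service=NameNode,name=FSNamesystem");
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                // Crude extraction from the JSON response; use a real JSON parser
                // for anything beyond a quick check.
                if (line.contains("PendingDeletionBlocks") || line.contains("ExcessBlocks")) {
                    System.out.println(line.trim());
                }
            }
        } finally {
            in.close();
        }
    }
}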
I set the replication factor from 3 to 2, but when I dump the NN metrics,
PendingDeletionBlocks is zero. Why?
If the check thread sleeps for an interval and then does its check work, how long
is the interval?
On Thu, Apr 10, 2014 at 10:50 AM, Harsh J ha...@cloudera.com wrote:
The replica deletion is
hi, maillist:
My HDFS cluster has been running for about a year, and I find that many
directories are very large. I wonder if some of them can be cleaned up,
like
/var/log/hadoop-yarn/apps
AFAIK, no tools now.
Regards,
Stanley Shi,
On Fri, Apr 11, 2014 at 9:09 AM, ch huang justlo...@gmail.com wrote:
hi, maillist:
How can I archive old data in HDFS? I have a lot of old data; the
data will not be used, but it takes a lot of space to store it. I want to
archive and zip the
Hadoop 2.4 is released; where can I download the hadoop-2.4 code from?
Thanks,
LiuLei
http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.0/
On Fri, Apr 11, 2014 at 10:23 AM, lei liu liulei...@gmail.com wrote:
Hadoop 2.4 is released; where can I download the hadoop-2.4 code from?
Thanks,
LiuLei
--
Cheers
-MJ
The official release can be found at:
http://www.apache.org/dyn/closer.cgi/hadoop/common/
But you can also choose to check out the code from the svn/git repository.
On Thu, Apr 10, 2014 at 8:08 PM, Mingjiang Shi m...@gopivotal.com wrote:
Do not use the InputSplit's getLocations() API to supply your file
path; it is not intended for such things, if that's what you've done in
your current InputFormat implementation.
If you're looking to store a single file path, use the FileSplit
class, or, if it's not as simple as that, do use it as a
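Roughly what that separation looks like in a custom split, as a sketch; the
NetcdfSplit class name and the extra variableName field are made up for
illustration, and only the FileSplit usage and the role of getLocations() are
the point:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Sketch of a custom split along the lines discussed above.
// The file path travels in the FileSplit fields (written/read with the split),
// while getLocations() is only a locality hint, i.e. the hostnames holding the
// data, never a place to carry the file path.
// The class name and the extra "variableName" field are hypothetical.
public class NetcdfSplit extends FileSplit {
    private String variableName; // example of extra, application-specific state

    public NetcdfSplit() {                        // required for deserialization
        super();
    }

    public NetcdfSplit(Path file, long start, long length,
                       String[] hosts, String variableName) {
        super(file, start, length, hosts);        // hosts feed getLocations()
        this.variableName = variableName;
    }

    public String getVariableName() {
        return variableName;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        super.write(out);                         // serializes path/start/length
        out.writeUTF(variableName);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        super.readFields(in);
        variableName = in.readUTF();
    }
}

The matching InputFormat would construct these splits with the block hosts it gets
from the FileSystem, and the RecordReader would recover the path with getPath().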