Did you get any error message in your log?
2014-04-18 10:07 GMT+08:00 :
> Hi,
>
>
>
> When I run the program below and connect to a remote host that does
> not exist, it always returns “HDFS connect OK.”
>
>
>
>
>
> int main()
>
> {
>
> hdfsFS fs = NULL;
>
> fs = hdfsConnect("172.16.19.28", 8020);
Hi All,
I am running Nutch on Hadoop 2.3.0 and I get a Hadoop spill exception even
though the disk space being utilized is only 30% of what is available.
I am looking at the syslog file in the userlogs directory; however, I do
not see much information there except the following line. How do I k
Hi,
When I run the program below and connect to a remote host that does not
exist, it always returns “HDFS connect OK.”
#include <stdio.h>
#include "hdfs.h" /* libhdfs header */

int main()
{
    hdfsFS fs = hdfsConnect("172.16.19.28", 8020);
    if (fs == NULL)
    {
        printf("HDFS connect error.\n");
        return 1;
    }
    printf("HDFS connect OK.\n"); /* this is what always prints */
    return 0;
}
Hi,
The following sequence is done:
hdfs dfs -mkdir /a
take snapshot s_0
hdfs dfs -mkdir -p /a/b/c
hdfs dfs -put foo /a/b/c
take snapshot s_1
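Spelled out, that sequence is roughly the following, assuming /a has been
made snapshottable first (the diff command at the end is the one discussed
below):

  hdfs dfs -mkdir /a
  hdfs dfsadmin -allowSnapshot /a
  hdfs dfs -createSnapshot /a s_0
  hdfs dfs -mkdir -p /a/b/c
  hdfs dfs -put foo /a/b/c
  hdfs dfs -createSnapshot /a s_1
  hdfs snapshotDiff /a s_0 s_1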
Now the command-line snapshotDiff between s_0 and s_1 shows just the
addition of directory "b". It should show the addition of directory "b/c" as
well as the addition of "b
Do you want to add "-Xmx4g" to your MR tasks? If so, just add it as
"mapred.child.java.opts" in mapred-site.xml.
On Fri, Apr 18, 2014 at 9:35 AM, Andy Srine wrote:
> Quick question. How would I pass the following JVM option to the Hadoop
> command line?
>
> "-Xmx4G"
>
> hadoop jar
>
> Thank
Quick question. How would I pass the following JVM option to the Hadoop
command line?
"-Xmx4G"
hadoop jar
Thanks,
Andy
Hadoop 2.4.0 doesn't have any known issue right now. I think it's a stable
release even if it's not in the stable download list. The only issue I met
is that you should upgrade Hive to Hive-0.12.0 after upgrading to 2.4.0,
for API compatibility.
On Fri, Apr 18, 2014 at 1:07 AM, MrAsanjar . wrote:
>
That is because your HDFS has no space left. Please check that your
datanodes are all started. Also, please check dfs.datanode.du.reserved in
hdfs-site.xml to make sure you don't reserve too large a capacity.
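For example, to reserve roughly 1 GB per volume (the value is in bytes;
pick whatever suits your disks):

  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>1073741824</value>
  </property>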
On Fri, Apr 18, 2014 at 7:42 AM, Shengjun Xin wrote:
> Did you start the datanode service?
>
>
> On T
I'm having an issue in client code where there are multiple clusters with HA
namenodes involved. Example setup using Hadoop 2.3.0:
Cluster A with the following properties defined in core, hdfs, etc:
dfs.nameservices=clusterA
dfs.ha.namenodes.clusterA=nn1,nn2
dfs.namenode.rpc-address.clusterA.nn
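(For reference, the full client-side HA set per nameservice is roughly the
following; the hostnames below are placeholders, not my real ones:)

  dfs.nameservices=clusterA
  dfs.ha.namenodes.clusterA=nn1,nn2
  dfs.namenode.rpc-address.clusterA.nn1=nn1.example.com:8020
  dfs.namenode.rpc-address.clusterA.nn2=nn2.example.com:8020
  dfs.client.failover.proxy.provider.clusterA=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider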
Did you start the datanode service?
On Thu, Apr 17, 2014 at 9:23 PM, Karim Awara wrote:
> Hi,
>
> Whenever I start Hadoop on 24 machines, the following exception
> appears in the jobtracker log file on the namenode. I would appreciate
> any help. Thank you.
>
>
>
> 2014-04-17 16:16:31,391 INFO
Hi all,
How stable is Hadoop 2.4.0? What are the known issues? Has anyone done
extensive testing on it?
Thanks in advance.
Howdy -
I'm running 2.3.0 with YARN and was trying to experiment with bad record
skipping (i.e., mapreduce.map.skip.maxrecords), but it doesn't seem like skip
mode is supported for mapreduce jobs under YARN anymore. Can anyone confirm
this?
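For context, I was setting the property the generic way, something like
this (the job name and paths are just placeholders):

  # assumes the job uses ToolRunner/GenericOptionsParser
  hadoop jar myjob.jar MyJob -D mapreduce.map.skip.maxrecords=1 /in /out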
Thanks!
rob
Yes, you are right.
Skip mode is not supported in YARN.
Thanks,
-Nirmal
From: Rob Golkosky
Sent: Thursday, April 17, 2014 8:52 PM
To: user@hadoop.apache.org
Subject: skip mode with yarn?
Howdy -
I'm running 2.3.0 with yarn and was trying to experiment
Hi,
I still don't think the problem is with the file size or anything like that.
This is a snippet of the jobtracker log just as I start Hadoop. It seems
there is an exception already.
2014-04-17 16:16:31,391 INFO org.apache.hadoop.mapred.JobTracker: Setting
safe mode to false. Requested by : karim
2014-04-17 16:16:31,
Hi,
Whenever I start Hadoop on 24 machines, the following exception appears
in the jobtracker log file on the namenode. I would appreciate any help.
Thank you.
2014-04-17 16:16:31,391 INFO org.apache.hadoop.mapred.JobTracker: Setting
safe mode to false. Requested by : karim
2014-04-17 16:
If the cluster is undergoing some networking issue, that means HDFS
shouldn't be working either, right? The cluster is quite free for my job,
and the machines have high specs: 18GB of memory each, quad core.
--
Best Regards,
Karim Ahmed Awara
On Thu, Apr 17, 2014 at 2:18 PM, Nitin Pawar wrote:
> you n
You need to allocate sufficient memory to the datanodes as well.
Also make sure that none of the network cards on your datanodes have gone
bad.
Most of the time, the error you saw comes up when there is heavy utilization
of the cluster or it is undergoing some kind of network issue.
On Thu, Apr 17, 2014 at
I'm setting mapred.child.java.opts to -Xmx8G. The dataset I'm using for the
mapreduce job is quite small (a few hundred megabytes) as well.
--
Best Regards,
Karim Ahmed Awara
On Thu, Apr 17, 2014 at 2:01 PM, Nitin Pawar wrote:
> Can you tell us the JVM memory allocated to all the datanodes?
>
>
> On Thu
Can you tell us the JVM memory allocated to all the datanodes?
On Thu, Apr 17, 2014 at 4:28 PM, Karim Awara wrote:
> Hi,
>
> I am running a mapreduce job on a cluster of 16 machines. HDFS is
> working normally; however, when I run a mapreduce job, it gives an error:
>
> java.io.IOException: Bad conn
Solved. The problem was with /etc/hosts.
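For anyone hitting the same thing: make sure each node's hostname maps to
its real IP rather than a loopback address, e.g. (addresses and names here
are just examples):

  127.0.0.1    localhost
  192.168.0.10 master01
  192.168.0.11 slave01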
--
Best Regards,
Karim Ahmed Awara
On Thu, Apr 17, 2014 at 12:38 PM, Karim Awara wrote:
> Hi,
>
> I am running a normal map-reduce job. The map phase finishes but the
> reduce phase does not start. The job is still running, but seems
> it is j
Hi,
I am running a mapreduce job on a cluster of 16 machines. HDFS is
working normally; however, when I run a mapreduce job, it gives an error:
java.io.IOException: Bad connect ack with firstBadLink
although I have all the processes up.
--
Best Regards,
Karim Ahmed Awara
Hi everyone!
I've started looking into log retention on Hadoop and noticed an
interesting option in the default Hadoop log4j.properties configuration:
# 30-day backup
# log4j.appender.DRFA.MaxBackupIndex=30
I've enabled it, and it had no effect on existing files even after
rotation happened. Tha
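For reference, that option sits in the DRFA (DailyRollingFileAppender)
block of the stock log4j.properties, roughly:

  log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
  log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
  # Rollover at midnight
  log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
  # 30-day backup
  # log4j.appender.DRFA.MaxBackupIndex=30

(As far as I know, log4j's DailyRollingFileAppender does not actually honor
MaxBackupIndex, which may be why enabling it has no visible effect.)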
Hi,
I am running a normal map-reduce job. The map phase finishes but the
reduce phase does not start. The job is still running, but it seems to be
just halting at the reduce phase.
Note: I am running Hadoop 1.2 pseudo-distributed on a single node.
Below is a snippet of the log file for
2
Thanks. It was because the hostname includes "_".
2014-04-17 12:39 GMT+08:00 Shengjun Xin :
> Maybe a configuration problem; what's the content of the configuration?
>
>
> On Thu, Apr 17, 2014 at 10:40 AM, 易剑 wrote:
>
>> *How to solve the following problem?*
>>
>>
>> *hadoop-hadoop-secondarynamenode-Tencen