Oh ok. Thanks Chaidy.
I was wondering if I can just use log4j's compression facility along with
time-based rolling (i.e., gzip the rolled files) and use less disk space.
This seems to be a feature available in 1.3 (not sure if it is also available
in log4j-1.2.15); I think I need to give it a try and see.
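For what it's worth, the log4j "extras" companion jar backports time-based rolling to the 1.2 line; a FileNamePattern ending in .gz is what triggers gzip compression on rollover. A minimal sketch, assuming the extras jar is on the classpath (the file paths and pattern layout here are placeholders, not from the original thread):

```xml
<appender name="ROLL" class="org.apache.log4j.rolling.RollingFileAppender">
  <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
    <!-- the .gz suffix tells log4j to gzip each rolled file -->
    <param name="FileNamePattern" value="/var/log/app/app.%d{yyyy-MM-dd}.log.gz"/>
  </rollingPolicy>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p %c - %m%n"/>
  </layout>
</appender>
```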
Hey Barry,
How did you start these daemons in the first place?
On Sun, Apr 29, 2012 at 1:16 AM, Barry, Sean F sean.f.ba...@intel.com wrote:
hduser@master:~$ /usr/java/jdk1.7.0/bin/jps
20907 TaskTracker
20629 SecondaryNameNode
25863 Jps
20777 JobTracker
20383 NameNode
20507 DataNode
I just restarted my machines and it works fine now.
-SB
-Original Message-
From: Harsh J [mailto:ha...@cloudera.com]
Sent: Sunday, April 29, 2012 5:55 AM
To: common-user@hadoop.apache.org
Subject: Re: Can’t stop hadoop daemons
Hey Barry,
How did you start these daemons in the first place?
Sean,
One cause I can think of is that your PID directory is on /tmp or so,
and the original saved PID files got cleared away by tmpwatch, leading
to this state.
To fix such a flaw, export HADOOP_PID_DIR in hadoop-env.sh to a more
persistent location (such as within HADOOP_HOME/pids itself,
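A minimal sketch of that change in conf/hadoop-env.sh (the exact directory is an assumption; any location that survives reboots and tmpwatch sweeps will do):

```shell
# hadoop-env.sh: keep daemon PID files out of /tmp so that tmpwatch
# cannot delete them while the daemons are still running
export HADOOP_PID_DIR=${HADOOP_HOME}/pids
```

Note the stop scripts look up the PID files from this same variable, so the daemons must be restarted once after the change for it to take effect.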
Hi everyone,
I want to run Hadoop (HBase) on an IBM JVM. I've seen that there were several
patches for that purpose. I am not a developer, so my knowledge of building Java
jars from sources is very limited, and the link to the nightly builds does not
work.
I only need hadoop-core-1.0.3.jar.
It sounds to me like you're running out of DN xceivers. Try the
solution offered at
http://hbase.apache.org/book.html#dfs.datanode.max.xcievers
I.e., add:
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
To your DNs' config/hdfs-site.xml and restart the DataNodes.
Thanks for the quick response, appreciate it. It looks like this might be
the issue, but I am still trying to understand what is causing so many
threads in my situation. Is a thread created per block or per file?
Because if it's per file, then it should not be more than 15.
My second
Hello, I'd like to ask what the preferred way is of getting running
jobs' progress from the Java application that executed them.
I'm using Hadoop 0.20.203. I tried the job.end.notification.url property,
which works well, but as the property name says, it sends only job-end
notifications.
What I
Hi Hadoop users,
Has anyone attended a Hadoop conference where there were talks about any
new features in Hadoop security?
I am trying to figure out whether any new features have been added to Hadoop
security after Kerberos.
Thanks
--
Cheers
Atul
Take a look at the JobClient API. You can use that to get the current
progress of a running job.
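A rough sketch of that approach with the old-API JobClient in 0.20 (the command-line JobID argument and the polling loop are illustrative assumptions, and the Hadoop jars must be on the classpath; this is not runnable outside a configured cluster):

```java
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class ProgressPoller {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();          // picks up mapred-site.xml
        JobClient client = new JobClient(conf);
        // assumption: args[0] is the job id printed at submission time,
        // e.g. "job_201204290001_0007"
        RunningJob job = client.getJob(JobID.forName(args[0]));
        while (job != null && !job.isComplete()) {
            System.out.printf("map %.0f%%  reduce %.0f%%%n",
                    job.mapProgress() * 100, job.reduceProgress() * 100);
            Thread.sleep(5000);                // poll every 5 seconds
        }
    }
}
```

If you submitted the job in the same JVM, you can skip the lookup and call mapProgress()/reduceProgress() directly on the RunningJob handle returned by JobClient.submitJob().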
On Sunday, April 29, 2012, Ondřej Klimpera wrote:
Tons of errors are seen after Map 100% Reduce 50%, but the job
still struggles to finish. What is the possible reason? Is
this issue fixed in any of the versions?
java.net.SocketTimeoutException: 69000 millis timeout while
waiting for channel to be ready for read. ch :
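If the timeouts come from slow DataNodes rather than an outright failure, one common workaround (an assumption here, not a confirmed fix for this particular job) is raising the HDFS socket timeouts in hdfs-site.xml on the clients and DataNodes:

```xml
<!-- values in milliseconds; defaults are 60000 and 480000 -->
<property>
  <name>dfs.socket.timeout</name>
  <value>180000</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>960000</value>
</property>
```

Raising timeouts only masks the symptom, though; it is worth checking the DataNode logs and GC pauses first to see why reads stall that long.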
Hi guys :
1) Does anybody know if there is a VM out there which runs EMR Hadoop? I
would like to have a
local VM for dev purposes that mirrors the EMR Hadoop instances.
2) How does EMR's Hadoop differ from Apache Hadoop and Cloudera's Hadoop?
--
Jay Vyas
MMSB/UCHC