Hi Harsh,
There are only three warnings in stderr:
*stderr logs*
log4j:WARN No appenders could be found for logger
(org.apache.hadoop.mapred.Child).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
for more info.
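Those log4j lines just mean the task JVM started without a log4j
configuration; they are warnings, not the failure itself. For a standalone
Java program the same warning disappears once log4j is initialized before
the first logging call — a minimal sketch (class name hypothetical):

    import org.apache.log4j.BasicConfigurator;
    import org.apache.log4j.Logger;

    public class Log4jInit {
      public static void main(String[] args) {
        // Installs a simple console appender, silencing the
        // "No appenders could be found" warning.
        BasicConfigurator.configure();
        Logger.getLogger(Log4jInit.class).info("log4j is now initialized");
      }
    }

Inside Hadoop tasks the daemon's log4j.properties normally does this, so
the warning is usually harmless.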
Actually, these are all the logs in stderr, and stdout is empty.
On Fri, Jan 3, 2014 at 4:12 PM, Azuryy Yu wrote:
Yes, I checked the code and found the exception comes from
lfs.mkdir(userFileCacheDir, null, false);
I also found that when the AM is located on CHBM224, all jobs fail, but
when the AM is located on CHBM223, all succeed.
On CHBM224:
# ls -l /data/mrlocal/1/yarn/
total 8
drwxrwxrwx 5 yarn yarn 4096 Nov 5 20:50 local
drwxr-xr-
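If CHBM223 and CHBM224 differ in the local-directory permissions shown
above, one way to confirm is to reproduce the failing call by hand on
CHBM224 as the job user — a rough sketch, with a placeholder path standing
in for the real usercache/filecache directory under
yarn.nodemanager.local-dirs:

    import org.apache.hadoop.fs.FileContext;
    import org.apache.hadoop.fs.Path;

    public class MkdirProbe {
      public static void main(String[] args) throws Exception {
        // Same call that throws inside the NodeManager.
        FileContext lfs = FileContext.getLocalFSFileContext();
        // Placeholder path: substitute your actual filecache dir.
        Path userFileCacheDir =
            new Path("/data/mrlocal/1/yarn/local/usercache/someuser/filecache");
        lfs.mkdir(userFileCacheDir, null, false);
        System.out.println("mkdir succeeded: " + userFileCacheDir);
      }
    }

If this fails on CHBM224 but not on CHBM223, the problem is the directory
ownership/permissions rather than the job itself.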
Hi,
I used XMLInputFormat, and in it I used the RecordReader class, the same as
you have given.
The whole XML has been split into parts. For example, consider the XML
below; after using the RecordReader class, the XML output is
the starting and end tag is Emp.
It does not convert into text.
Please suggest an
Sent from my iPad
> On 1 Jan 2014 (B.E. 2557), at 6:59 AM, Manoj Samel wrote:
>
> * Name node and secondary name nodes on different machines
> * Kerberos was just enabled
> * Cloudera CDH 4.5 on Centos
>
> Secondary name node log (HOST2) shows following
>
>
> 2013-12-3
Hi,
Is it possible for submitted jobs to stay waiting before they start to run?
Is there a command that lists the jobs that have been submitted and are
waiting to start running?
--
Thanks,
For Hadoop 1.x:
you can refer to this for the command line:
http://hadoop.apache.org/docs/r0.19.0/commands_manual.html#job
Alternatively, you can query the JobHistory and JobTracker servers to get
it via the API as well.
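For the waiting-jobs part specifically, "hadoop job -list" prints a state
column, and the same information is available programmatically. A minimal
sketch against the Hadoop 1.x mapred API (class name hypothetical), listing
jobs still in PREP, i.e. submitted but not yet running:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobStatus;

    public class PendingJobs {
      public static void main(String[] args) throws Exception {
        // Reads the JobTracker address from mapred-site.xml on the classpath.
        JobClient client = new JobClient(new JobConf());
        for (JobStatus status : client.getAllJobs()) {
          if (status.getRunState() == JobStatus.PREP) {
            System.out.println(status.getJobID() + " is submitted but not running");
          }
        }
      }
    }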
On Fri, Jan 3, 2014 at 6:57 PM, xeon Mailinglist
wrote:
It depends on your scheduler, and yes, it's possible.
On Fri, Jan 3, 2014 at 7:01 PM, Nitin Pawar wrote:
My mapred-site.xml file is given below. I haven't set any
mapred.task.tracker.report.address.

hduser@pc321:/usr/local/hadoop/conf$ vi mapred-site.xml

<property>
  <name>mapred.job.tracker</name>
  <value>pc228:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then jobs are run in-process as a single map
  and reduce task.</description>
</property>
I see the default block size for HDFS is 64 MB, is this a value that can be
changed easily?
Yes it can. It is a configurable property. The exact name might differ
depending on the version though.
Read the details here:
https://hadoop.apache.org/docs/current2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
http://books.google.com/books?id=H3mvcxPeUfwC&pg=PA183&lpg=PA183&dq=change+hadoo
Change dfs.block.size in hdfs-site.xml to the value you would like if you
want all new files to have a different block size.
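If changing the cluster-wide default is not an option, a block size can
also be chosen per file at create time. A minimal sketch, with a
hypothetical output path and illustrative sizes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PerFileBlockSize {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // 128 MB for this one file only; the cluster default is untouched.
        long blockSize = 128L * 1024 * 1024; // 134217728 bytes
        FSDataOutputStream out = fs.create(
            new Path("/tmp/example.dat"), // hypothetical path
            true,                         // overwrite if it exists
            4096,                         // write buffer size in bytes
            (short) 3,                    // replication factor
            blockSize);
        out.writeUTF("hello");
        out.close();
      }
    }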
On Fri, Jan 3, 2014 at 11:37 AM, Kurt Moesky wrote:
> I see the default block size for HDFS is 64 MB, is this a value that can
> be changed easily?
>
How do I remove one of the slave nodes?
I have a namenode (master) and 3 datanodes (slaves) running. I would like
to remove one of the datanodes, which is problematic. How can I do this?
Unfortunately, I don't have access to that problematic datanode.
Thanks
Navaz
Is there a programmatic or HTTP interface for querying the logs of a YARN
application run? Ideally I would start with the AppID, query the AppMaster
log, and then descend into the task logs.
Thanks
John
Hi,
I suggest using XPath; Java has native support for parsing XML with it.
For the main problem: as with the distcp command
( http://hadoop.apache.org/docs/r0.19.0/distcp.pdf ), there is no need for
a reduce function, because you can parse the XML input file and create the
file you need in the map phase alone.
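As a minimal illustration of the XPath suggestion (the Emp tags below are
made up to match the earlier example, not taken from the actual input):

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class XPathRecord {
      public static void main(String[] args) throws Exception {
        // One record as produced by an XML record reader.
        String record = "<Emp><name>Jane</name><id>42</id></Emp>";
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new InputSource(new StringReader(record)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        System.out.println(xpath.evaluate("/Emp/name", doc)); // prints Jane
        System.out.println(xpath.evaluate("/Emp/id", doc));   // prints 42
      }
    }

The same evaluate() calls would run inside a mapper against each record the
record reader hands over.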
Gaurav,
[BCCing user@h.a.o to move the thread out of it]
It seems you are missing some step when reconfiguring your cluster in
Cloudera Manager; you should not have to modify things by hand in your setup.
Adding the Cloudera Manager user alias.
Thanks.
On Fri, Jan 3, 2014 at 7:11 AM, Gaurav Shank
See:
https://issues.apache.org/jira/browse/YARN-649
https://issues.apache.org/jira/browse/YARN-1524
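In the meantime, if log aggregation is enabled
(yarn.log-aggregation-enable), the logs can also be fetched from the
command line after the application finishes, e.g.:

    yarn logs -applicationId <application ID>

That covers the retrieval part, though not the per-task drill-down the
JIRAs above are about.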
On Fri, Jan 3, 2014 at 8:50 AM, John Lilley wrote:
Yes, I am setting it, but it still hangs there.
Also, there are no failure logs; it just hangs without erroring out.
Any log location you want me to look at?
On Friday, January 3, 2014 8:38:16 PM UTC+5:30, Gunnar Tapper wrote:
Hi Gaurav,
What are you setting HADOOP_CONF_DIR to? IME, you get the hang if you don't
set it as:
export HADOOP_CONF_DIR=/etc/hadoop/conf.cloudera.yarn1
Gunnar
On Fri, Jan 3, 2014 at 6:47 AM, Gaurav Shankhdhar <
shankhdhar.gau...@gmail.com> wrote:
> Folks,
>
> I am trying to run "teragen" pr
Also note that the block size in recent releases is actually called
"dfs.blocksize" as opposed to "dfs.block.size", and that you can set it per
job as well. In that scenario, just pass it as an argument to your job
(e.g. hadoop jar yourjob.jar -D dfs.blocksize=134217728).
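One caveat: a -D generic option only reaches the job if the driver runs
through ToolRunner/GenericOptionsParser. A minimal sketch (class name
hypothetical) of a driver that picks it up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class BlockSizeJob extends Configured implements Tool {
      @Override
      public int run(String[] args) throws Exception {
        // Anything passed as -D key=value on the command line has
        // already been merged into this Configuration by ToolRunner.
        Configuration conf = getConf();
        System.out.println("dfs.blocksize = " + conf.get("dfs.blocksize"));
        // ... build and submit the actual job with conf here ...
        return 0;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new BlockSizeJob(), args));
      }
    }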
Regards
From: David Sinclair [
I am running a wordcount example on MRv2, but I get this error on a
Datanode. It looks like a problem in the network between the Namenode and
the Datanode, but I am not sure.
What is this error? How can I fix this problem?
2014-01-03 16:46:29,319 INFO
org.apache.hadoop.hdfs.server.datanode
As I am new to HDFS: I was told that the minimum block size is 64 MB, is
that correct?
XG
On Jan 4, 2014, at 3:12, "German Florez-Larrahondo"
<german...@samsung.com> wrote:
XG,
The newer default is 128 MB [HDFS-4053]. The minimum, however, can be
as low as io.bytes.per.checksum (default: 512 bytes) if the user so
wishes it. To administratively set a limit to prevent low values from
being used, see the config introduced via HDFS-4305.
On Sat, Jan 4, 2014 at 11:38 AM,
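For reference, the HDFS-4305 limit is a NameNode-side setting in
hdfs-site.xml; if I recall the property name correctly, it is
dfs.namenode.fs-limits.min-block-size, e.g.:

    <property>
      <name>dfs.namenode.fs-limits.min-block-size</name>
      <!-- reject file creates that ask for blocks smaller than 1 MB -->
      <value>1048576</value>
    </property>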