Hi,
Does anyone know where I can download the Hadoop HTML docs? The online
version is here,
http://hadoop.apache.org/docs/
What I want is exactly those HTML pages.
Is there a repository for docs (the code repo does not contain these docs)?
Thanks!
~t
You can find the docs in the release binary tarball. You can download it from
the Apache mirrors.
On Tue, Apr 8, 2014 at 4:16 PM, Tianyin Xu t...@cs.ucsd.edu wrote:
I want to look at the default rack configuration, so I used the following
command as the book says:
[hadoop@master sbin]$ hadoop fsck -rack
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
14/04/08 01:22:56 WARN util.NativeCodeLoader: Unable to
I wrote some data into a file using MultipleOutputs in mappers. I can see the
file's contents, but its size is reported as zero by the commands hadoop fs
-du file and hadoop fs -ls file, as follows:
-rw-r--r-- 3 hadoop hadoop 0 2014-04-07 22:06
awesome!!! Thanks a lot, Gordon!
(I realized I never used a binary version LoL)
~t
On Tue, Apr 8, 2014 at 1:23 AM, Gordon Wang gw...@gopivotal.com wrote:
On Tue, Apr 8, 2014 at 4:26 PM, EdwardKing zhan...@neusoft.com wrote:
hadoop fsck -rack
You gave an invalid argument to the fsck command. The correct command
is hadoop fsck / -racks
Run hadoop fsck without arguments to see the full help message.
--
Cheers
-MJ
A simple YARN application hangs on the sandbox on the second run with multiple instances.
code : https://github.com/hortonworks/simple-yarn-app
command : hadoop jar
/usr/lib/hadoop-yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.2.0.6.0-76.jar
Client -classpath
As per the given exception stack trace, it is trying to use the local file system.
Can you check whether you have configured the file system with HDFS?
Thanks
Devaraj K
From: divye sheth [mailto:divs.sh...@gmail.com]
Sent: Tuesday, April 08, 2014 5:37 PM
To: user@hadoop.apache.org
If you didn't close the file correctly, then the NameNode wouldn't be notified
of the final size of the file. The file size is metadata coming from the
NameNode.
On Tue, Apr 8, 2014 at 4:35 AM, Tao Xiao xiaotao.cs@gmail.com wrote:
My mapper code is as follows, and I don't know whether any file is left
unclosed.
public class TheMapper extends Mapper&lt;LongWritable, Text, Text, NullWritable&gt; {
    private MultipleOutputs&lt;Text, NullWritable&gt; outputs;

    @Override
    protected void setup(Context ctx) {
        outputs = new MultipleOutputs&lt;Text, NullWritable&gt;(ctx);
Hi there,
I've got a three-node setup with hadoop-1.0.4, one as both master and slave,
the other two as slaves. Running jps shows the datanode etc. are all running
properly. Passwordless SSH from the master to the other machines is also OK.
I tried to run the wordcount example,
Hi,
I saw that pretty much right after sending the email. I verified the properties
file and it has all the correct properties; even mapred.framework.name is set
to yarn. I am unable to figure out the cause and why it is
connecting to the local FS.
Using the same configuration file I am able to
Hi all,
I backported SnapshotInputFormat to hbase 0.94. This resulted in a huge
performance gain while the scan is running. Here is the corresponding jira
https://issues.apache.org/jira/browse/HBASE-8369. The overall performance
gain is offset by a very long initialization period. It can take
Hi Rohith,
Thanks for the reply.
Mine is a YARN application. I have some files that are local to the nodes where
the containers run, and I want to clean them up at the end of the container
execution. So, I want to do this cleanup on the same node my container ran
on. With what you are suggesting,
Have you looked at HBASE-10642 which was integrated to 0.94.18 ?
Cheers
On Tue, Apr 8, 2014 at 8:19 AM, David Quigley dquigle...@gmail.com wrote:
No, I haven't seen that; I think I backported just before that JIRA was
created. Thanks! I'll check it out.
On Tue, Apr 8, 2014 at 7:45 AM, Ted Yu yuzhih...@gmail.com wrote:
Hi,
Does this JIRA issue mean that we can't currently reuse a container for
running/launching two different processes one after another?
https://issues.apache.org/jira/browse/YARN-373
If that is true, are there any plans for making that possible?
Thanks,
Kishore
Hi Devaraj,
I went through multiple links, all asking me to check if
mapreduce.framework.name is set to yarn. It is, along with fs.defaultFS
properly pointing to the NameNode.
But it still tries to connect to the local FS. I am not sure what to do;
please help me out with some pointers, as I am
Dear All,
I was wondering if the following is possible using MapReduce.
I would like to create a job that loops over a bunch of documents,
tokenizes them into ngrams, and stores not only the counts of the ngrams but
also _which_ document(s) had each particular ngram. In other
If you have set the fs.defaultFS configuration to HDFS, it should use HDFS.
Please also make sure that you have updated the Hadoop and dependency jar
files on the client side to the Hadoop 2.2.0 jars.
Thanks
Devaraj K
From: divye sheth [mailto:divs.sh...@gmail.com]
Sent: Tuesday,
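For reference, this is roughly what the relevant client-side configuration looks like in Hadoop 2.2.0 (the host name and port are placeholders; adjust them to your cluster). In core-site.xml:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
```

and in mapred-site.xml:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

If a job still goes to the local file system, a common cause is that these files are not on the client's classpath.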
Yes, you can write custom Writable classes that describe and serialise
your required data structure. If you have Hadoop: The Definitive
Guide, check out its section "Serialization" in the chapter "Hadoop
I/O".
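To sketch the idea (this is an illustration, not code from the thread): a Hadoop Writable pairs a write(DataOutput) method with a readFields(DataInput) method. The hypothetical NgramOccurrences class below shows that contract using only plain JDK types; in real use it would additionally implement org.apache.hadoop.io.Writable, which requires exactly these two method signatures plus a no-argument constructor:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical value type: an ngram plus the IDs of the documents it occurs in.
// In Hadoop this class would implement org.apache.hadoop.io.Writable; the
// write/readFields signatures below match that interface.
class NgramOccurrences {
    private String ngram = "";
    private List<String> docIds = new ArrayList<String>();

    public NgramOccurrences() {}

    public NgramOccurrences(String ngram, List<String> docIds) {
        this.ngram = ngram;
        this.docIds = docIds;
    }

    // Serialise: length-prefixed list of document IDs after the ngram itself.
    public void write(DataOutput out) throws IOException {
        out.writeUTF(ngram);
        out.writeInt(docIds.size());
        for (String id : docIds) {
            out.writeUTF(id);
        }
    }

    // Deserialise in the same order the fields were written.
    public void readFields(DataInput in) throws IOException {
        ngram = in.readUTF();
        int n = in.readInt();
        docIds = new ArrayList<String>(n);
        for (int i = 0; i < n; i++) {
            docIds.add(in.readUTF());
        }
    }

    public String getNgram() { return ngram; }
    public List<String> getDocIds() { return docIds; }
}
```

A mapper could emit one such value per ngram occurrence, and the reducer would merge the document lists.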
On Tue, Apr 8, 2014 at 9:16 PM, Natalia Connolly
natalia.v.conno...@gmail.com wrote:
Dear All,
- Adding parsing logic in mappers/reducers, or just writing JSON strings, is
the simplest but least elegant way to do it.
- You get more advanced by writing custom Writables which serialise the
data.
- The truly portable and right way to do it is
For local container cleanup, could the files be cleaned up in a ShutdownHook?
Thanks &amp; Regards
Rohith Sharma K S
From: Krishna Kishore Bonagiri [mailto:write2kish...@gmail.com]
Sent: 08 April 2014 20:01
To: user@hadoop.apache.org
Subject: Re: Cleanup activity on YARN containers
Hi Rohith,
Is there something like a shutdown hook for containers? Could you please also
tell me how to use it?
Thanks,
Kishore
On Wed, Apr 9, 2014 at 8:34 AM, Rohith Sharma K S rohithsharm...@huawei.com
wrote:
Is there something like a shutdown hook for containers?
There is no container-specific shutdown hook.
I was referring to the Java shutdown hook, i.e.
'Runtime.getRuntime().addShutdownHook(Thread hook)' (see
http://docs.oracle.com/javase/7/docs/api/java/lang/Thread.html), registered
during start of the container JVM. In
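As an illustration of what that looks like (a minimal sketch with made-up names, not YARN API code): the container's main method registers the hook first thing, and the JVM runs it on normal exit or on SIGTERM:

```java
import java.io.File;

// Illustrative cleanup for a YARN container process: register a JVM shutdown
// hook at container start; it deletes local temp files when the JVM exits.
class ContainerCleanup {

    // Registers a shutdown hook that deletes the given directory recursively.
    // Returns the hook thread so callers can deregister it if needed.
    static Thread registerCleanupHook(final File dir) {
        Thread hook = new Thread(new Runnable() {
            public void run() {
                deleteRecursively(dir);
            }
        });
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    // Deletes a file, or a directory and everything under it.
    static void deleteRecursively(File f) {
        File[] children = f.listFiles(); // null when f is a plain file
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        f.delete();
    }
}
```

Note that shutdown hooks do not run if the JVM is killed with SIGKILL, so this is best-effort cleanup only.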