Actually I started to play with the latest release, 0.23.0, on two nodes
yesterday. It was easy to start HDFS; however, it took me a while to
configure YARN. I set the variable HADOOP_COMMON_HOME to where you
extracted the tarball and HADOOP_HDFS_HOME to the local dir where I pointed
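For anyone following along, the environment setup described above might look like the following. The install path is an assumption; substitute wherever you extracted the 0.23.0 tarball.

```shell
# Hypothetical extraction directory -- adjust to your own layout.
export HADOOP_COMMON_HOME=/opt/hadoop-0.23.0
export HADOOP_HDFS_HOME=/opt/hadoop-0.23.0
```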
Hi, I am trying to run the matrix multiplication example mentioned (with
source code) at the following link:
http://www.norstad.org/matrix-multiply/index.html
I have Hadoop set up in pseudo-distributed mode and I configured it using this
tutorial:
On 30/11/11 04:29, Nitin Khandelwal wrote:
Thanks,
I missed the sbin directory, was using the normal bin directory.
Thanks,
Nitin
On 30 November 2011 09:54, Harsh J ha...@cloudera.com wrote:
Like I wrote earlier, it's in the $HADOOP_HOME/sbin directory, not the
regular bin/ directory.
On Wed,
Friends,
I want to know how the JobTracker stores information about TaskTrackers and
their tasks.
Is it stored in memory, or is it stored in a file?
If anyone knows, please let me know.
Thanks & Regards,
Mohmmadanis Moulavi
Student,
I'm not sure of the exact meaning of the TaskTracker information you
mentioned.
There is a TaskTrackerStatus class, and when the system runs, the TaskTracker
transmits serialized objects of this class, which contain some information,
to the JobTracker through the heartbeat.
And there is a HashMap<String, TaskTrackerStatus> in
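As an illustration of the idea described above (the JobTracker keeping an in-memory map of per-tracker status, updated from objects arriving via heartbeat), here is a minimal self-contained Java sketch. The class and field names are simplified stand-ins, not Hadoop's actual code.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Hadoop's TaskTrackerStatus; fields are hypothetical.
class TaskTrackerStatus implements Serializable {
    final String trackerName;
    final int runningTasks;

    TaskTrackerStatus(String trackerName, int runningTasks) {
        this.trackerName = trackerName;
        this.runningTasks = runningTasks;
    }
}

public class JobTrackerSketch {
    // In-memory view of the cluster, keyed by tracker name.
    // Nothing here is persisted to a file.
    private final Map<String, TaskTrackerStatus> trackers = new HashMap<>();

    // Called when a heartbeat (carrying a serialized status object) arrives;
    // the latest status simply replaces the previous one.
    void heartbeat(TaskTrackerStatus status) {
        trackers.put(status.trackerName, status);
    }

    int runningTasksOn(String trackerName) {
        TaskTrackerStatus s = trackers.get(trackerName);
        return s == null ? 0 : s.runningTasks;
    }

    public static void main(String[] args) {
        JobTrackerSketch jt = new JobTrackerSketch();
        jt.heartbeat(new TaskTrackerStatus("tracker_host1:50060", 3));
        System.out.println(jt.runningTasksOn("tracker_host1:50060"));
    }
}
```

Because the map lives only in the JobTracker's heap, a restart loses the cluster view until heartbeats repopulate it, which matches the "in memory, not in a file" answer to the question above.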
Thank you for your help.
I can use the sbin/hadoop-daemon.sh {start|stop} {service} script to start a
NameNode, but I can't start a ResourceManager.
2011/11/30 Harsh J ha...@cloudera.com
I simply use the /sbin/hadoop-daemon.sh {start|stop} {service} script
to control daemons at my end.
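As a concrete sketch of the commands being discussed (assuming a 0.23-style layout, where the ResourceManager is a YARN daemon and so is driven by yarn-daemon.sh rather than hadoop-daemon.sh, which would explain the failure reported above):

```shell
# HDFS daemons go through sbin/hadoop-daemon.sh:
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh stop namenode

# YARN daemons (ResourceManager, NodeManager) go through sbin/yarn-daemon.sh:
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
$HADOOP_HOME/sbin/yarn-daemon.sh stop resourcemanager
```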
Does
The error is that you cannot open /tmp/MatrixMultiply/out/_logs.
Does the directory exist?
Do you have the proper access rights set?
Joep
On Wed, Nov 30, 2011 at 3:23 AM, ChWaqas waqas...@gmail.com wrote:
Hi, I am trying to run the matrix multiplication example mentioned (with
source code) at the
It seems the ClassNotFoundException is the most common problem.
Try pointing HADOOP_COMMON_HOME to $HADOOP_HOME/share/hadoop/common.
On my machine it's /usr/bin/hadoop/share/hadoop/common
On 30 November 2011 18:50, hailong.yang1115 hailong.yang1...@gmail.com wrote:
Actually I started to play with
For your reading pleasure!
PDF 3.3MB uploaded at (the mailing list has a cap of 1MB attachments):
https://docs.google.com/open?id=0B-zw6KHOtbT4MmRkZWJjYzEtYjI3Ni00NTFjLWE0OGItYTU5OGMxYjc0N2M1
I'd appreciate it if you can spare some time to peruse this little experiment of
mine to use comics as a
Hi Maneesh,
Thanks a lot for this! Just distributed it over the team and comments are
great :)
Best regards,
Dejan
On Wed, Nov 30, 2011 at 9:28 PM, maneesh varshney mvarsh...@gmail.com wrote:
For your reading pleasure!
PDF 3.3MB uploaded at (the mailing list has a cap of 1MB attachments):
Thanks Maneesh.
Quick question: does a client really need to know the block size and
replication factor? A lot of the time the client has no control over these
(they are set at the cluster level).
-Prashant Kommireddi
On Wed, Nov 30, 2011 at 12:51 PM, Dejan Menges dejan.men...@gmail.com wrote:
Hi Maneesh,
Thanks a
Hi Prashant
Others may correct me if I am wrong here...
The client (org.apache.hadoop.hdfs.DFSClient) has knowledge of the block size
and replication factor. In the source code, I see the following in the
DFSClient constructor:
defaultBlockSize = conf.getLong("dfs.block.size",
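To make the quoted fragment concrete: Configuration.getLong(key, default) returns the cluster-configured value when the key is set, and the compiled-in default otherwise, so the client always ends up with *some* block size without being required to supply one. Below is a rough, self-contained imitation of that contract; MiniConf is a toy stand-in for org.apache.hadoop.conf.Configuration, not the real class.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for org.apache.hadoop.conf.Configuration, for illustration only.
class MiniConf {
    private final Map<String, String> props = new HashMap<>();

    void set(String key, String value) { props.put(key, value); }

    // Same contract as Configuration.getLong: fall back to the supplied
    // default when the key is absent from the loaded configuration.
    long getLong(String key, long defaultValue) {
        String v = props.get(key);
        return v == null ? defaultValue : Long.parseLong(v);
    }
}

public class BlockSizeExample {
    public static void main(String[] args) {
        MiniConf conf = new MiniConf();

        // Key unset: the default (here 64 MB) is used.
        long defaultBlockSize = conf.getLong("dfs.block.size", 64L * 1024 * 1024);
        System.out.println(defaultBlockSize); // 67108864

        // Admin sets the property (as in hdfs-site.xml): that value wins.
        conf.set("dfs.block.size", String.valueOf(128L * 1024 * 1024));
        System.out.println(conf.getLong("dfs.block.size", 64L * 1024 * 1024));
    }
}
```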
Sure, it's just a case of how readers interpret it:
1. The client is required to specify the block size and replication factor each
time.
2. The client does not need to worry about it, since an admin has set the
properties in the default configuration files.
A client could not be allowed to override the
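The precedence both readings describe can be sketched in a few lines: a per-request value, if supplied, wins; otherwise the cluster-wide default from the admin's configuration applies. The key name and values below are illustrative, not a claim about Hadoop's exact resolution code.

```java
// Sketch of client-vs-admin precedence for a setting like block size.
public class BlockSizePrecedence {
    // Cluster-wide default, as an admin would set it in hdfs-site.xml.
    static final long CLUSTER_DEFAULT = 64L * 1024 * 1024;

    // requested <= 0 means "client did not specify a value".
    static long effectiveBlockSize(long requested) {
        return requested > 0 ? requested : CLUSTER_DEFAULT;
    }

    public static void main(String[] args) {
        System.out.println(effectiveBlockSize(0));                  // cluster default
        System.out.println(effectiveBlockSize(128L * 1024 * 1024)); // client override
    }
}
```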
Maneesh,
Firstly, I love the comic :)
Secondly, I am inclined to agree with Prashant on this latest point. While one
code path could take us through the user defining command-line overrides (e.g.
hadoop fs -D blah -put foo bar), I think it might confuse a person new to
Hadoop. The most common
Hi,
This is indeed a good way to explain it; most of the improvements have already
been discussed. Waiting for the sequel of this comic.
Regards,
Abhishek
On Wed, Nov 30, 2011 at 1:55 PM, maneesh varshney mvarsh...@gmail.com wrote:
Hi Matthew
I agree with both you and Prashant. The strip needs to be
Hi all,
very cool comic!
Thanks,
Alex
On Wed, Nov 30, 2011 at 11:58 PM, Abhishek Pratap Singh manu.i...@gmail.com
wrote:
Hi,
This is indeed a good way to explain it; most of the improvements have already
been discussed. Waiting for the sequel of this comic.
Regards,
Abhishek
On Wed, Nov 30,