Krish,
What is the exact error you get in the browser when accessing the NN using the
hostname?
Make sure that the hosts file of the machine that is running the browser can
resolve the hostname.
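A quick way to verify this is to check resolution programmatically from the browser machine; a minimal sketch (the hostname "mynamenode" is a placeholder for whatever appears in the browser URL):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Resolve-check utility: prints the address a hostname resolves to on this
// machine, or a hint if it does not resolve via DNS or /etc/hosts.
public class ResolveCheck {
    static String resolve(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return null; // not resolvable from this machine
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "mynamenode"; // placeholder
        String addr = resolve(host);
        System.out.println(addr != null
                ? host + " -> " + addr
                : host + " does not resolve; check /etc/hosts on this machine");
    }
}
```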
Original message
From: Krish Donald gotomyp...@gmail.com
Hi
If you click the maps from the JT web UI you will move to the map task list.
From there, if you click any task it will lead to the next level, where you
can see the task's host and its input split location. If the host and the
input split location are the same, then that is a data-local map task.
Thanks
Abirami
On Jan 14, 2015 12:04
[hadoop@mynamenode ~]$ chkconfig --list |grep -i iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off
[hadoop@mynamenode ~]$ sestatus
SELinux status: disabled
On Wed, Jan 14, 2015 at 12:27 PM, johny casanova pcgamer2...@outlook.com
wrote:
Did you make
Fei:
You can watch this issue:
HDFS-7613 Block placement policy for erasure coding groups
The solution there would be helpful to us.
Cheers
On Wed, Jan 14, 2015 at 11:04 AM, Fei Hu hufe...@gmail.com wrote:
Thank you for your quick response.
After reading the materials you recommended, my
Hi,
Thank you for your help.
I searched HDFS-7613 by Google and the link
https://issues.apache.org/jira/issues/?jql=project%20%3D%20HDFS%20AND%20text%20~%20%227613%22,
but I could not find it.
Could
Thank you very much.
Fei
On Jan 14, 2015, at 7:36 PM, Ted Yu yuzhih...@gmail.com wrote:
https://issues.apache.org/jira/browse/HDFS-7613
Cheers
Hi
In YARN, shuffle and sort are pluggable:
http://hadoop.apache.org/docs/r2.5.2/hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html
Currently, shuffle is based on sort, but many of my MapReduce jobs do not
need sorting.
To improve performance, maybe it is
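For reference, these are the plugin points that page documents; the values below are the stock defaults, shown only to indicate where a custom shuffle/sort implementation would be plugged in:

```xml
<!-- Job-level plugin points, set in the job configuration / mapred-site.xml.
     The values shown are the built-in defaults. -->
<property>
  <name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name>
  <value>org.apache.hadoop.mapreduce.task.reduce.Shuffle</value>
</property>
<property>
  <name>mapreduce.job.map.output.collector.class</name>
  <value>org.apache.hadoop.mapred.MapTask$MapOutputBuffer</value>
</property>
```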
In KNN-like algorithms we need to load the model data into the cache for
predicting the records.
Here is the example for KNN.
[image: Inline image 1]
So if the model is a large file, say 1 or 2 GB, will we be able to load it
into the distributed cache?
One way is to split/partition the model
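For context, the per-record prediction step of KNN (once the model rows are available in memory, e.g. after being read from the distributed cache in a mapper's setup()) can be sketched in plain Java; class and method names are illustrative, not from any poster's code, and binary 0/1 labels are assumed for brevity:

```java
import java.util.Arrays;
import java.util.Comparator;

// Minimal KNN prediction over an in-memory model: for one query point,
// find the k nearest model rows by Euclidean distance and majority-vote
// their labels. In a MapReduce job this would run once per input record.
public class KnnSketch {
    static int predict(double[][] model, int[] labels, double[] query, int k) {
        Integer[] idx = new Integer[model.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // Sort model row indices by distance to the query point.
        Arrays.sort(idx, Comparator.comparingDouble(i -> dist(model[i], query)));
        int[] votes = new int[2]; // assumes binary labels 0/1
        for (int i = 0; i < k; i++) votes[labels[idx[i]]]++;
        return votes[1] > votes[0] ? 1 : 0;
    }

    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }
}
```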
The program will be used in a production environment.
Do you mean that the program must be deployed on a node of the cluster?
I have some experience operating databases: I can query/edit/add/remove
data on the OS which the database is installed on, or operate from another
machine remotely. Can I
Yes, one of my friends is implementing the same. I know global sharing of
data is not possible across Hadoop MapReduce, but I need to check whether
that can be done somehow in Hadoop MapReduce as well, because I found some
papers on KNN in Hadoop too.
And I am trying to compare the performance too.
Hope some
Have you considered implementing this using something like Spark? That could
be much easier than raw MapReduce.
On Wed, Jan 14, 2015 at 10:06 PM, unmesha sreeveni unmeshab...@gmail.com
wrote:
In KNN like algorithm we need to load model Data into cache for predicting
the records.
Here is the
Hi Johny,
I have all the entries for my namenode and datanode in the below format in
/etc/hosts
ipaddress hostname fully_qualified_hostname
But I am still not sure why I am facing this issue.
Thanks
Krish
On Tue, Jan 13, 2015 at 9:11 PM, johny casanova pcgamer2...@outlook.com
wrote:
Sorry, typo wrong.
2015-01-04 16:36 GMT+08:00 Daniel Jankovic play...@gmail.com:
resubscribe :)
On Wed, Dec 31, 2014 at 8:22 AM, Ted Yu yuzhih...@gmail.com wrote:
Please see http://hadoop.apache.org/mailing_lists.html#User
Cheers
On Dec 30, 2014, at 11:17 PM, Zarey Chang
Hi,
I wrote some MapReduce code in my project *my_prj*. *my_prj* will be
deployed on a machine which is not a node of the cluster.
How does *my_prj* run a MapReduce job in this case?
thank you!
Best Regards,
Iridium
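For what it's worth, submitting a job from a machine outside the cluster generally just requires the Hadoop client libraries plus configuration that points at the cluster; a minimal sketch of the client-side core-site.xml (hostname and port are placeholders; in practice you would copy the cluster's own *-site.xml files from a cluster node rather than write them by hand):

```xml
<!-- Client-side core-site.xml on the non-cluster machine (illustrative). -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mynamenode:8020</value>
  </property>
</configuration>
<!-- yarn-site.xml would likewise need yarn.resourcemanager.hostname set. -->
```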
Your data won't get split, so your program runs as a single mapper and a
single reducer, and your intermediate data is not shuffled and sorted. But
you can use this for debugging.
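If this is describing the local job runner (my reading of the context, not something the poster states), that debugging mode is selected with a standard client-side property:

```xml
<property>
  <name>mapreduce.framework.name</name>
  <value>local</value>
</property>
```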
On Jan 14, 2015 2:04 PM, Cao Yi iridium...@gmail.com wrote:
Hi,
I write some mapreduce code in my project *my_prj*.