Re: Problem in viewing WEB UI

2009-06-22 Thread Pankil Doshi
I am not sure, but sometimes the datanodes appear to be running from the
cmd prompt..
But when you actually look at the logs, you find some kind of error in
them.. Check the datanode logs..

Pankil

On Wed, Jun 17, 2009 at 1:42 AM, ashish pareek pareek...@gmail.com wrote:

 Hi,

  When I run the command *bin/hadoop dfsadmin -report* it shows that 2
 datanodes are alive, but when I try http://hadoopmaster:50070/ the problem
 is that it does not open the
 http://hadoopmaster:50070/dfshealth.jsp page and throws an *HTTP 404 error*.
 So why is it happening like this?
 Regards,
 Ashish Pareek


  On Wed, Jun 17, 2009 at 10:06 AM, Sugandha Neaolekar 
 sugandha@gmail.com wrote:

  Well, you just have to specify the address in the URL address bar as::
  http://hadoopmaster:50070 and you'll be able to see the web UI..!
 
 
  On Tue, Jun 16, 2009 at 7:17 PM, ashish pareek pareek...@gmail.com
 wrote:
 
  HI Sugandha,
             Hmmm, your suggestion helped, and now I am able
  to run two datanodes, one on the same machine as the namenode and the other
  on a different machine. Thanks a lot :)
 
   But the problem now is that I am not able to see the web UI
  for both the datanodes as well as the namenode.
  Should I consider some more settings in the site.xml? If so, please
  help...
 
  Thanking you again,
  regards,
  Ashish Pareek.
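
For anyone hitting the same 404: the page at port 50070 is served by the NameNode's embedded web server, and its address is controlled by a property in hadoop-site.xml. A minimal sketch follows; the property names are the usual ones for Hadoop releases of this era, and the hostname is only an assumption to adapt:

<property>
  <name>dfs.http.address</name>
  <value>hadoopmaster:50070</value>      <!-- NameNode web UI; serves dfshealth.jsp -->
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:50075</value>           <!-- each DataNode's own web UI -->
</property>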
 
  On Tue, Jun 16, 2009 at 3:10 PM, Sugandha Naolekar 
  sugandha@gmail.com wrote:
 
  hi!
 
 
  First of all, get your concepts of Hadoop clear.
  You can refer to the following
 
  site::
 
  http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)
 
 
 
  I have a small doubt: in the master's and the slave's site.xml, can we give
  the same port number to both of them, like
 
 
  for the slave :
 
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoopslave:9000</value>
  </property>
 
 
   for the master:::
 
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoopmaster:9000</value>
  </property>
 
 
  Well, any two daemons or services can use the same port number as long as
  they are not running on the same machine. If you wish to run the DN and NN
  on the same machine, their port numbers have to be different.
 
 
 
 
  On Tue, Jun 16, 2009 at 2:55 PM, ashish pareek pareek...@gmail.com
 wrote:
 
  HI sugandha,
 
 
 
  And one more thing: can we have this in the slave:::
 
  <property>
    <name>dfs.datanode.address</name>
    <value>hadoopmaster:9000</value>
    <value>hadoopslave:9001</value>
  </property>
 
 
 
  Also, fs.default.name is the tag which specifies the default filesystem.
  And generally, it runs on the namenode. So, its value has to be the
  namenode's address only, and not a slave's.
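
As a concrete sketch (hostname and port are assumptions to adapt), the same entry would go into every node's hadoop-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoopmaster:9000</value>  <!-- always the namenode's address, on master and slaves alike -->
</property>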
 
 
 
  Else, if you have a complete procedure for installing and running Hadoop
  in a cluster, can you please send it to me? I need to set up Hadoop
  within two days and show it to my guide. Currently I am doing my masters.
 
  Thanks for spending your time
 
 
  Try the above, and this should work!
 
 
 
  regards,
  Ashish Pareek
 
 
  On Tue, Jun 16, 2009 at 2:33 PM, Sugandha Naolekar 
  sugandha@gmail.com wrote:
 
  The following changes are to be done::
 
  Under the master folder::
 
  - Put the slave's address as well under the values of the
  tag (dfs.datanode.address).
 
  - You want to make the namenode a datanode as well. As per your config
  file, you have specified hadoopmaster in your slaves file. If you don't want
  that, remove it from the slaves file.
 
  Under the slave folder::
 
  - Put only the slave's address (the m/c where you intend to run your datanode)
  under the datanode.address tag. Else
  it should go as such::
 
  <property>
    <name>dfs.datanode.address</name>
    <value>hadoopmaster:9000</value>
    <value>hadoopslave:9001</value>
  </property>
 
  Also, your port numbers should be different. The daemons NN, DN, JT, TT
  should run independently on different ports.
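
To illustrate the point about distinct ports, a rough sketch of how the RPC addresses might be laid out when the NN and a DN share a machine (property names as in the 0.17/0.18-era defaults; hostnames and ports are assumptions):

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoopmaster:9000</value>   <!-- NameNode RPC -->
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>hadoopmaster:9001</value>          <!-- JobTracker RPC; a different port from the NN -->
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50010</value>              <!-- DataNode data-transfer port; must not collide with the above -->
</property>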
 
 
  On Tue, Jun 16, 2009 at 2:05 PM, Sugandha Naolekar 
  sugandha@gmail.com wrote:
 
 
 
  -- Forwarded message --
  From: ashish pareek pareek...@gmail.com
  Date: Tue, Jun 16, 2009 at 2:00 PM
  Subject: Re: org.apache.hadoop.ipc.client : trying connect to server
  failed
  To: Sugandha Naolekar sugandha@gmail.com
 
 
 
 
  On Tue, Jun 16, 2009 at 1:58 PM, ashish pareek pareek...@gmail.com
 wrote

Problem in viewing WEB UI

2009-06-16 Thread ashish pareek
Hi,

  When I run the command *bin/hadoop dfsadmin -report* it shows that 2
datanodes are alive, but when I try http://hadoopmaster:50070/ the problem
is that it does not open the
http://hadoopmaster:50070/dfshealth.jsp page and throws an *HTTP 404 error*.
So why is it happening like this?
Regards,
Ashish Pareek


 On Wed, Jun 17, 2009 at 10:06 AM, Sugandha Neaolekar 
sugandha@gmail.com wrote:

 Well, you just have to specify the address in the URL address bar as::
 http://hadoopmaster:50070 and you'll be able to see the web UI..!


 On Tue, Jun 16, 2009 at 7:17 PM, ashish pareek pareek...@gmail.com wrote:

 HI Sugandha,
            Hmmm, your suggestion helped, and now I am able
 to run two datanodes, one on the same machine as the namenode and the other on
 a different machine. Thanks a lot :)

  But the problem now is that I am not able to see the web UI
 for both the datanodes as well as the namenode.
 Should I consider some more settings in the site.xml? If so, please
 help...

 Thanking you again,
 regards,
 Ashish Pareek.

 On Tue, Jun 16, 2009 at 3:10 PM, Sugandha Naolekar 
 sugandha@gmail.com wrote:

 hi!


 First of all, get your concepts of Hadoop clear.
 You can refer to the following

 site::
 http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)


 I have a small doubt: in the master's and the slave's site.xml, can we give
 the same port number to both of them, like


 for the slave :

 <property>
   <name>fs.default.name</name>
   <value>hdfs://hadoopslave:9000</value>
 </property>


  for the master:::

 <property>
   <name>fs.default.name</name>
   <value>hdfs://hadoopmaster:9000</value>
 </property>



 Well, any two daemons or services can use the same port number as long as they
 are not running on the same machine. If you wish to run the DN and NN on the same
 machine, their port numbers have to be different.




 On Tue, Jun 16, 2009 at 2:55 PM, ashish pareek pareek...@gmail.com wrote:

 HI sugandha,



 And one more thing: can we have this in the slave:::

 <property>
   <name>dfs.datanode.address</name>
   <value>hadoopmaster:9000</value>
   <value>hadoopslave:9001</value>
 </property>



 Also, fs.default.name is the tag which specifies the default filesystem.
 And generally, it runs on the namenode. So, its value has to be the namenode's
 address only, and not a slave's.



 Else, if you have a complete procedure for installing and running Hadoop in a
 cluster, can you please send it to me? I need to set up Hadoop within
 two days and show it to my guide. Currently I am doing my masters.

 Thanks for spending your time


 Try the above, and this should work!



 regards,
 Ashish Pareek


 On Tue, Jun 16, 2009 at 2:33 PM, Sugandha Naolekar 
 sugandha@gmail.com wrote:

 The following changes are to be done::

 Under the master folder::

 - Put the slave's address as well under the values of the
 tag (dfs.datanode.address).

 - You want to make the namenode a datanode as well. As per your config
 file, you have specified hadoopmaster in your slaves file. If you don't want
 that, remove it from the slaves file.

 Under the slave folder::

 - Put only the slave's address (the m/c where you intend to run your datanode)
 under the datanode.address tag. Else
 it should go as such::

 <property>
   <name>dfs.datanode.address</name>
   <value>hadoopmaster:9000</value>
   <value>hadoopslave:9001</value>
 </property>

 Also, your port numbers should be different. The daemons NN, DN, JT, TT
 should run independently on different ports.


 On Tue, Jun 16, 2009 at 2:05 PM, Sugandha Naolekar 
 sugandha@gmail.com wrote:



 -- Forwarded message --
 From: ashish pareek pareek...@gmail.com
 Date: Tue, Jun 16, 2009 at 2:00 PM
 Subject: Re: org.apache.hadoop.ipc.client : trying connect to server
 failed
 To: Sugandha Naolekar sugandha@gmail.com




 On Tue, Jun 16, 2009 at 1:58 PM, ashish pareek 
 pareek...@gmail.com wrote:

 HI,
  I am sending a .tar.gz containing both the master and datanode
 config files ...

 Regards,
 Ashish Pareek


 On Tue, Jun 16, 2009 at 1:47 PM, Sugandha Naolekar 
 sugandha@gmail.com wrote:

 Can you please send me a zip or a tar file? I don't have Windows systems,
 but Linux.


 On Tue, Jun 16, 2009 at 1:19 PM, ashish pareek pareek...@gmail.com
  wrote:

 HI Sugandha,
   Thanks for your reply. I am sending you the
 master and slave configuration files; if you can go through them and tell me
 where I am going wrong, it would be helpful.

 Hope to get a reply soon ... Thanks
 again!

 Regards,
 Ashish Pareek

 On Tue, Jun 16

Re: Web ui

2009-04-08 Thread Rasit OZDAS
@Nick, I use Ajax very often and have previously done projects with ZK
and jQuery, and I can easily say that GWT was the easiest of them.
JavaScript is only needed where the core features aren't enough. I can
safely assume that we won't need any inline JavaScript.

@Philip,
Thanks for the pointer. That is a better solution than I imagined,
actually, and I won't have to wait since it's a resolved issue.

-- 
M. Raşit ÖZDAŞ


Web ui

2009-04-07 Thread Rasit OZDAS
Hi,

I started to write my own web UI with GWT. With GWT I can manage
everything within one page, and I can set refresh durations for
each part of the page. It also gives a better look and feel with the help
of GWT styling.

But I can't get references to the NameNode and JobTracker instances.
I found out that they're sent to the web UI as application parameters when
Hadoop initializes.

I'll try to contribute the GUI part of my project to the Hadoop source, if you
want, no problem.
But I need static references to the NameNode and JobTracker for this.

And I think it will be useful for everyone like me.

M. Rasit OZDAS


Re: Web ui

2009-04-07 Thread Nick Cen
Hi Rasit,

I have done a little bit of research on GWT previously, and I think that
framework is a little bit difficult to use for Ajax development because it uses
its own data format and the JS has to be placed in comments.
Have you considered making the server side just output pure JSON/XML data, and
letting the client side pick up whatever it wants?


2009/4/7 Rasit OZDAS rasitoz...@gmail.com

 Hi,

 I started to write my own web ui with GWT. With GWT I can manage
 everything within one page, I can set refreshing durations for
 each part of the page. And also a better look and feel with the help
 of GWT styling.

 But I can't get references of NameNode and JobTracker instances.
 I found out that they're sent to web ui as application parameters when
 hadoop initializes.

 I'll try to contribute gui part of my project to hadoop source, if you
 want, no problem.
 But I need static references to namenode and jobtracker for this.

 And I think it will be useful for everyone like me.

 M. Rasit OZDAS




-- 
http://daily.appspot.com/food/


Re: Web ui

2009-04-07 Thread Philip Zeyliger
On Tue, Apr 7, 2009 at 5:11 AM, Rasit OZDAS rasitoz...@gmail.com wrote:

 Hi,

 I started to write my own web ui with GWT. With GWT I can manage
 everything within one page, I can set refreshing durations for
 each part of the page. And also a better look and feel with the help
 of GWT styling.

 But I can't get references of NameNode and JobTracker instances.
 I found out that they're sent to web ui as application parameters when
 hadoop initializes.


https://issues.apache.org/jira/browse/HADOOP-5257 proposes a plugin layer
for Hadoop daemons.  Might be useful for your application.


Re: Hostnames on MapReduce Web UI

2009-02-15 Thread Nick Cen
Try commenting out the localhost definition in your /etc/hosts file.

2009/2/14 S D sd.codewarr...@gmail.com

 I'm reviewing the task trackers on the web interface (
 http://jobtracker-hostname:50030/) for my cluster of 3 machines. The names
 of the task trackers do not list real domain names; e.g., one of the task
 trackers is listed as:

 tracker_localhost:localhost/127.0.0.1:48167

 I believe that the networking on my machines is set correctly. What do I
 need to configure so that the listing above will show the actual domain
 name? This will help me in diagnosing where problems are occurring in my
 cluster. Note that at the top of the page the hostname (in my case storm)
 is properly listed; e.g.,

 storm Hadoop Machine List

 Thanks,
 John




-- 
http://daily.appspot.com/food/


Re: Hostnames on MapReduce Web UI

2009-02-15 Thread S D
Thanks, this did it. I changed my /etc/hosts file on each node from
 127.0.0.1 localhost localhost.localdomain
 127.0.0.1 hostname
to just switch the order to
 127.0.0.1 hostname
 127.0.0.1 localhost localhost.localdomain
This did the trick! I vaguely recall from somewhere that I need the
localhost localhost.localdomain line, so I thought I had better avoid removing it
altogether.

Thanks,
John



On Sun, Feb 15, 2009 at 10:38 AM, Nick Cen cenyo...@gmail.com wrote:

 Try comment out te localhost definition in your /etc/hosts file.

 2009/2/14 S D sd.codewarr...@gmail.com

  I'm reviewing the task trackers on the web interface (
  http://jobtracker-hostname:50030/) for my cluster of 3 machines. The
 names
  of the task trackers do not list real domain names; e.g., one of the task
  trackers is listed as:
 
  tracker_localhost:localhost/127.0.0.1:48167
 
  I believe that the networking on my machines is set correctly. What do I
  need to configure so that the listing above will show the actual domain
  name? This will help me in diagnosing where problems are occurring in my
  cluster. Note that at the top of the page the hostname (in my case
 storm)
  is properly listed; e.g.,
 
  storm Hadoop Machine List
 
  Thanks,
  John
 



 --
 http://daily.appspot.com/food/



Hive Web-UI

2008-10-10 Thread Edward Capriolo
I was checking out this slide show:
http://www.slideshare.net/jhammerb/2008-ur-tech-talk-zshao-presentation/
In the diagram a Web-UI exists. This is the first I have heard of
it. Is this part of, or planned to be a part of, contrib/hive? I think
a web interface for showing table schemas and executing jobs would be
very interesting. Is anyone working on something like this? If not, I
have a few ideas.


Fwd: Hive Web-UI

2008-10-10 Thread Jeff Hammerbacher
Hey Edward,

The UI mentioned in those slides leverages many internal display
libraries from Facebook. If you wanted to make a UI that leverages the
metastore's thrift interface but only uses open source display
libraries, I think it would definitely be appreciated by Hive users.

Thanks,
Jeff


-- Forwarded message --
From: Edward Capriolo [EMAIL PROTECTED]
Date: Fri, Oct 10, 2008 at 10:13 AM
Subject: Hive Web-UI
To: core-user@hadoop.apache.org


I was checking out this slide show.
http://www.slideshare.net/jhammerb/2008-ur-tech-talk-zshao-presentation/
in the diagram a Web-UI exists. This was the first I have heard of
this. Is this part of or planned to be a part of contrib/hive? I think
a web interface for showing table schema and executing jobs would be
very interesting. Is anyone working on something like this? If not, I
have a few ideas.


which param in the hadoop-default.xml desides the port of Web UI?

2008-07-30 Thread wangxiaowei
Dear All,
  I use hadoop-0.17.0. I want to observe the progress of a job through the web UI,
but I can't find the right param in hadoop-default.xml to decide the port
and address.
  I know that in Hadoop-0.15.3, dfs.info.port is the base port number for the dfs
namenode web UI. What about hadoop-0.17.0?
  Thanks.

Re: which param in the hadoop-default.xml desides the port of Web UI?

2008-07-30 Thread Xuebing Yan
mapred.job.tracker.http.address does.

<property>
  <name>mapred.job.tracker.http.address</name>
  <value>0.0.0.0:50030</value>
  <description>
    The job tracker http server address and port the server will listen on.
    If the port is 0 then the server will start on a free port.
  </description>
</property>

-Xuebing
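
For the HDFS side of the question: if I remember the 0.17 defaults correctly, the old dfs.info.port setting was superseded by an address-style property, roughly like the sketch below (the value shown is the usual default; treat the exact wording as a paraphrase):

<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
  <description>
    The address and base port for the dfs namenode web UI.
    If the port is 0 then the server will start on a free port.
  </description>
</property>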

On Wed, 2008-07-30 at 17:03 +0800, wangxiaowei wrote:
 Dear All,
   I use hadoop-0.17.0.I want to observe the progress of the Job through Web 
 ui .but I cann`t find the right param in the hadoop-default.xml to deside the 
 port and address.
   I know the Hadoop-0.15.3 is dfs.info.port is the base port number for the 
 dfs namenode web ui.What about hadoop-0.17.0?
   Thanks.