Re: Lucene Example error

2011-12-22 Thread Harsh J
This does not look related to Apache Hadoop. Perhaps you want to post this to the Lucene user lists instead? On Fri, Dec 23, 2011 at 12:24 PM, Srinivas Surasani wrote: > When I try to run the Lucene example from the command line, I get the > following exception. > > [root@localhost lucene-3.4.0]# echo $C

Lucene Example error

2011-12-22 Thread Srinivas Surasani
When I try to run the Lucene example from the command line, I get the following exception. [root@localhost lucene-3.4.0]# echo $CLASSPATH /root/lucene-3.4.0/lucene-core-3.4.0.jar:/root/lucene-3.4.0/contrib/demo/lucene-demo-3.4.0.jar:/usr/java/jdk1.6.0_26/lib [root@localhost lucene-3.4.0]# export JAV

Re: java.net.ConnectException: Connection refused

2011-12-22 Thread Harsh J
Your NameNode is listening on localhost. In HDFS, your regular client needs to simply communicate with a NameNode address to do its operations -- you do not need to explicitly connect to a particular DataNode. Also, maybe as you grow your cluster you want to switch out fs.default.name from localho
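
A minimal core-site.xml sketch of the suggestion above (the hostname `master` and port 9000 are placeholder values, not taken from the thread):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Clients only talk to the NameNode at this address; DataNodes are
       discovered through it and never addressed directly. -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```

With this in place, a plain `bin/hadoop fs -ls /` resolves against the NameNode, so no DataNode hostname is needed on the command line.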

Re: java.net.ConnectException: Connection refused

2011-12-22 Thread Joey Krabacher
If you are just trying to list the HDFS content then just use: bin/hadoop fs -ls See if that command gets you what you're looking for. --Joey On Thu, Dec 22, 2011 at 8:45 PM, warren wrote: > hi everyone > > I useing hadoop-0.20.203.0,one master and one slave. > > I can see one live datanode at:

Re: Re: DN limit

2011-12-22 Thread bourne1900
Sorry, here is a more detailed description: I want to know how many files a datanode can hold, so there is only one datanode in my cluster. When the datanode holds 14 million files, the cluster stops working: the datanode has used all of its memory (32 GB), while the namenode's memory is fine. Bourne Sender: Adrian Liu Dat
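
A back-of-the-envelope sketch of the metadata load 14 million files implies. The figures are assumptions, not from the thread: a commonly quoted rule of thumb is roughly 150 bytes of heap per namespace object, and each small file contributes about one file object plus one block object. Note this only bounds metadata bookkeeping; real per-replica overhead on a DataNode (replica map, block scanner state, GC headroom) is considerably larger, which is consistent with 14 million files exhausting 32 GB.

```java
public class HeapEstimate {
    // Rough estimate: each small file contributes ~1 file object and
    // ~1 block object; ~150 bytes per object is a common rule of thumb.
    static long estimateHeapMB(long files, long bytesPerObject) {
        long objects = files * 2;                 // file + block
        return (objects * bytesPerObject) / (1024 * 1024);
    }

    public static void main(String[] args) {
        // 14 million files, as in the thread
        System.out.println(estimateHeapMB(14_000_000L, 150) + " MB");
        // prints "4005 MB" -- a floor for metadata alone
    }
}
```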

Re: DN limit

2011-12-22 Thread Adrian Liu
In my understanding, the max number of files stored in HDFS should be /sizeof(inode struct). This max number of HDFS files should be no smaller than the max number of files a datanode can hold. Please feel free to correct me because I am just beginning to learn Hadoop. On Dec 23, 2011, at 10:35 AM, bourn

java.net.ConnectException: Connection refused

2011-12-22 Thread warren
Hi everyone. I am using hadoop-0.20.203.0 with one master and one slave. I can see one live datanode at: http://localhost:50070/dfshealth.jsp When I type: bin/hadoop dfs -fs datanode1 -ls / I found that: 11/12/23 10:44:21 WARN fs.FileSystem: "datanode1" is a deprecated filesystem name. Use "hdfs://da

Re: Hadoop configuration

2011-12-22 Thread Humayun kabir
Hello Uma, Thanks for your cordial and quick reply. It would be great if you could explain what you suggested to do. Right now we are running with the following configuration. We are using Hadoop on VirtualBox. When it is a single node it works fine for big datasets larger than the default block size. b

RE: Automate Hadoop installation

2011-12-22 Thread Tom Wilcox
I'm interested in this too. You might find this article helpful: http://hstack.org/hstack-automated-deployment-using-puppet/ Apparently the Adobe SaaS guys are responsible for creating a project for puppet-Hadoop stack deployments... -Original Message- From: warren [mailto:hadoop.com..

Re: measuring network throughput

2011-12-22 Thread alo alt
Yes, I know ;) You can grab and extend the metrics as you like. Here is a post from Sematext: http://blog.sematext.com/2011/07/31/extending-hadoop-metrics/ - Alex On Thu, Dec 22, 2011 at 2:45 PM, Rita wrote: > Yes, I think they can graph it for you. However, I am looking for raw data > because I wo

Re: measuring network throughput

2011-12-22 Thread Rita
Yes, I think they can graph it for you. However, I am looking for raw data because I would like to create something custom On Thu, Dec 22, 2011 at 8:19 AM, alo alt wrote: > Rita, > > ganglia give you a throughput like Nagios. Could that help? > > - Alex > > On Thu, Dec 22, 2011 at 1:58 PM, Rit

Re: Automate Hadoop installation

2011-12-22 Thread warren
If you use Ubuntu, you can try ubuntu-orchestra-modules-hadoop. On 2011-12-06 03:20, Konstantin Boudnik wrote: There's a great project called BigTop (in the Apache Incubator) which provides for building the Hadoop stack. Part of what it provides is a set of Puppet recipes which will allow yo

Re: measuring network throughput

2011-12-22 Thread alo alt
Rita, Ganglia gives you throughput, like Nagios does. Could that help? - Alex On Thu, Dec 22, 2011 at 1:58 PM, Rita wrote: > Is there a tool or a method to measure the throughput of the cluster at a > given time? It would be a great feature to add > > > > > > -- > --- Get your facts first, then you

measuring network throughput

2011-12-22 Thread Rita
Is there a tool or a method to measure the throughput of the cluster at a given time? It would be a great feature to add -- --- Get your facts first, then you can distort them as you please.--

Concurrent calls to waitForCompletion cause ClassCastException

2011-12-22 Thread dnspies
When running a program which runs concurrent map-reduce jobs, I get a ClassCastException if I use waitForCompletion. (Not with submit, but I do need the threads to wait for the jobs to complete. Is there a way to do that with submit?) This example http://old.nabble.com/file/p33022696/Main.java
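
On the "is there a way to do that with submit?" question: the mapreduce `Job` API also exposes a non-blocking `isComplete()`, so one workaround is to `submit()` every job and then poll, avoiding concurrent `waitForCompletion` calls entirely. A JDK-only sketch of that polling loop, with the job handles abstracted to `BooleanSupplier` checks (the names here are illustrative, not from the thread; in real code each supplier would wrap a job's `isComplete()`):

```java
import java.util.List;
import java.util.function.BooleanSupplier;

public class WaitForAll {
    // Poll each completion check until all report true, e.g. wrapping
    // job::isComplete for every job already submitted via job.submit().
    static void waitForAll(List<BooleanSupplier> done, long pollMillis) {
        while (!done.stream().allMatch(BooleanSupplier::getAsBoolean)) {
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // preserve status
                return;
            }
        }
    }

    public static void main(String[] args) {
        // Simulate a "job" that completes after ~50 ms.
        long end = System.currentTimeMillis() + 50;
        waitForAll(List.of(() -> System.currentTimeMillis() >= end), 10);
        System.out.println("all jobs complete");
    }
}
```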

RE: Hadoop configuration

2011-12-22 Thread Uma Maheswara Rao G
Hey Humayun, To solve the "too many fetch failures" problem, you should configure host mapping correctly: each tasktracker should be able to ping every other tasktracker by hostname. Regards, Uma From: Humayun kabir [humayun0...@gmail.com] Sent: Thursday, December 22, 2011 2:
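
A hosts-file sketch of the mapping Uma describes (the hostnames and addresses are made up): every node should resolve every other node's hostname to the same address, e.g. in /etc/hosts on each machine:

```
192.168.1.10   master
192.168.1.11   slave1
```

Verify with `ping slave1` from every node. A common cause of fetch failures on Ubuntu-style installs is a leftover `127.0.1.1` entry for the machine's real hostname, which makes reducers try to fetch map output from the wrong address.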

Re: Hadoop configuration

2011-12-22 Thread alo alt
Hi, Apache: http://hadoop.apache.org/common/docs/current/cluster_setup.html RHEL / CentOS: http://mapredit.blogspot.com/p/get-hadoop-cluster-running-in-20.html Ubuntu: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/ - Alex On Thu, Dec 22, 2011 at 10:24

Re: Hadoop configuration

2011-12-22 Thread raghavendhra rahul
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/ This is the easiest guide to configure hadoop. On Thu, Dec 22, 2011 at 2:54 PM, Humayun kabir wrote: > someone please help me to configure hadoop such as core-site.xml, > hdfs-site.xml, mapred-site.xml etc. >

Hadoop configuration

2011-12-22 Thread Humayun kabir
Someone please help me configure Hadoop (core-site.xml, hdfs-site.xml, mapred-site.xml, etc.) and provide some examples. It is badly needed, because I run a 2-node cluster, and when I run the wordcount example it fails with too many fetch failures.
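
A minimal sketch of the three files for a two-node 0.20.x cluster, assuming the master's hostname is `master` and using conventional ports 9000/9001 (all placeholders; adjust to your setup):

```xml
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>  <!-- one replica per datanode -->
  </property>
</configuration>

<!-- mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
```

As Uma notes elsewhere in this thread, fetch failures are usually a hostname-resolution problem rather than a config-file problem, so check that both nodes can resolve and ping each other by name as well.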

Configurable keys for MapReduce

2011-12-22 Thread dnspies
Most places throughout the Hadoop framework instantiate keys with ReflectionUtils.newInstance(keyClass, configuration). I thought that meant I could configure the keys before they read or write, but apparently this is not so. The attached example http://old.nabble.com/file/p330222
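
For reference, `ReflectionUtils.newInstance(klass, conf)` only injects the configuration when the instantiated class implements `Configurable`; otherwise the `Configuration` argument is ignored. A JDK-only sketch of that mechanism (`Conf`, `Configurable`, and `newInstance` here are simplified stand-ins, not the real Hadoop classes):

```java
import java.util.HashMap;

public class MiniReflection {
    // Simplified stand-ins for Hadoop's Configuration / Configurable.
    public static class Conf extends HashMap<String, String> {}

    public interface Configurable { void setConf(Conf conf); }

    // Mirrors ReflectionUtils.newInstance: instantiate via the no-arg
    // constructor, then inject the configuration only if the class
    // opted in by implementing Configurable.
    public static <T> T newInstance(Class<T> klass, Conf conf) {
        try {
            T obj = klass.getDeclaredConstructor().newInstance();
            if (obj instanceof Configurable) {
                ((Configurable) obj).setConf(conf);
            }
            return obj;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // A key type that receives the configuration at creation time.
    public static class MyKey implements Configurable {
        public Conf conf;
        public void setConf(Conf conf) { this.conf = conf; }
    }

    public static void main(String[] args) {
        Conf conf = new Conf();
        conf.put("my.flag", "true");
        MyKey key = newInstance(MyKey.class, conf);
        System.out.println(key.conf.get("my.flag"));  // prints "true"
    }
}
```

So a key class that wants its `Configuration` must implement `Configurable` itself; whether every code path that creates keys actually passes a live configuration is a separate question, which seems to be what the poster ran into.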