This does not look related to Apache Hadoop. Perhaps you want to post
this to the Lucene user list?
On Fri, Dec 23, 2011 at 12:24 PM, Srinivas Surasani wrote:
> When I am trying to run lucene example from command line, I get the
> following exception.
>
> [root@localhost lucene-3.4.0]# echo $C
When I am trying to run lucene example from command line, I get the
following exception.
[root@localhost lucene-3.4.0]# echo $CLASSPATH
/root/lucene-3.4.0/lucene-core-3.4.0.jar:/root/lucene-3.4.0/contrib/demo/lucene-demo-3.4.0.jar:/usr/java/jdk1.6.0_26/lib
[root@localhost lucene-3.4.0]# export JAV
Your NameNode is listening on localhost. In HDFS, a client only needs to
communicate with the NameNode address to do its operations -- you do not
need to explicitly connect to a particular DataNode.
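For instance, here is a minimal client sketch in Java (the hostname and port are placeholders, not values from this thread); it only ever names the NameNode, and the block reads and writes behind the scenes find the DataNodes on their own:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal HDFS client sketch: it only names the NameNode address.
// "namenode-host:9000" is a placeholder, not a value from this thread.
public class ListRoot {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Same setting as fs.default.name in core-site.xml (0.20.x naming).
    conf.set("fs.default.name", "hdfs://namenode-host:9000");
    FileSystem fs = FileSystem.get(conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
  }
}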
Also, maybe as you grow your cluster you want to switch out
fs.default.name from localho
If you are just trying to list the HDFS contents, then use:
bin/hadoop fs -ls
See if that command gets you what you're looking for.
--Joey
On Thu, Dec 22, 2011 at 8:45 PM, warren wrote:
> hi everyone
>
> I am using hadoop-0.20.203.0, one master and one slave.
>
> I can see one live datanode at:
Sorry, here is a more detailed description:
I want to know how many files a datanode can hold, so there is only one
datanode in my cluster.
When the datanode holds 14 million files, the cluster stops working: the
datanode has used all of its memory (32 GB), while the namenode's memory is OK.
Bourne
Sender: Adrian Liu
Dat
In my understanding, the max number of files stored in HDFS should be the
NameNode memory size / sizeof(inode struct). This max number of HDFS files
should be no smaller than the max number of files a datanode can hold.
Please feel free to correct me, because I'm just beginning to learn Hadoop.
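To make that concrete, here is a rough back-of-the-envelope sketch; the ~150 bytes per namespace object is a commonly cited rule of thumb, not a figure from this thread:

public class NameNodeHeapEstimate {
  public static void main(String[] args) {
    // Assumption: roughly 150 bytes of NameNode heap per namespace object
    // (file, directory, or block); each small file costs about one file
    // entry plus one block entry.
    long files = 14000000L;      // the file count mentioned in this thread
    long objectsPerFile = 2;     // one file entry plus roughly one block
    long bytesPerObject = 150;   // assumed average, not measured here
    double gib = files * objectsPerFile * bytesPerObject / (1024.0 * 1024 * 1024);
    System.out.println("Estimated NameNode heap: ~" + gib + " GiB");  // roughly 4 GiB
  }
}

By that rough estimate the NameNode side is nowhere near 32 GB for 14 million files, which matches the earlier observation that the namenode's memory was fine while the datanode's was exhausted.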
On Dec 23, 2011, at 10:35 AM, bourn
hi everyone
I am using hadoop-0.20.203.0, one master and one slave.
I can see one live datanode at: http://localhost:50070/dfshealth.jsp
when I type: bin/hadoop dfs -fs datanode1 -ls /
I found that:
11/12/23 10:44:21 WARN fs.FileSystem: "datanode1" is a deprecated
filesystem name. Use "hdfs://da
Hello Uma,
Thanks for your cordial and quick reply. It would be great if you could
explain what you suggested doing. Right now we are running on the following
configuration.
We are using Hadoop on VirtualBox. When it is a single node, it works fine
for big datasets larger than the default block size. b
I'm interested in this too.
You might find this article helpful:
http://hstack.org/hstack-automated-deployment-using-puppet/
Apparently the Adobe SaaS guys are responsible for creating a project for
puppet-Hadoop stack deployments...
-Original Message-
From: warren [mailto:hadoop.com..
Yes, I know ;) You can grab and extend the metrics as you like. Here is a
post from Sematext:
http://blog.sematext.com/2011/07/31/extending-hadoop-metrics/
- Alex
On Thu, Dec 22, 2011 at 2:45 PM, Rita wrote:
> Yes, I think they can graph it for you. However, I am looking for raw data
> because I wo
Yes, I think they can graph it for you. However, I am looking for raw data
because I would like to create something custom
On Thu, Dec 22, 2011 at 8:19 AM, alo alt wrote:
> Rita,
>
> Ganglia gives you throughput, much like Nagios. Could that help?
>
> - Alex
>
> On Thu, Dec 22, 2011 at 1:58 PM, Rit
If you use Ubuntu, you can try ubuntu-orchestra-modules-hadoop.
On 2011-12-06 03:20, Konstantin Boudnik wrote:
There's that great project called BigTop (in the Apache Incubator) which
provides for building the Hadoop stack.
Part of what it provides is a set of Puppet recipes which will allow yo
Rita,
Ganglia gives you throughput, much like Nagios. Could that help?
- Alex
On Thu, Dec 22, 2011 at 1:58 PM, Rita wrote:
> Is there a tool or a method to measure the throughput of the cluster at a
> given time? It would be a great feature to add
>
> --
> --- Get your facts first, then you
Is there a tool or a method to measure the throughput of the cluster at a
given time? It would be a great feature to add
--
--- Get your facts first, then you can distort them as you please.--
When running a program which runs concurrent map-reduce jobs, I get a
ClassCastException if I use waitForCompletion. (Not with submit, but I do
need the threads to wait for the jobs to complete. Is there a way to do
that with submit?)
This example http://old.nabble.com/file/p33022696/Main.java
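One hedged sketch of the submit-then-poll idea, using the new-API org.apache.hadoop.mapreduce.Job (the jobs list here is a placeholder, not the attached Main.java):

import java.util.List;
import org.apache.hadoop.mapreduce.Job;

// Submit several jobs without blocking, then poll until they all finish.
public class SubmitAndPoll {
  public static void waitForAll(List<Job> jobs) throws Exception {
    for (Job job : jobs) {
      job.submit();                 // returns immediately, unlike waitForCompletion()
    }
    boolean allDone = false;
    while (!allDone) {
      allDone = true;
      for (Job job : jobs) {
        if (!job.isComplete()) {    // asks the JobTracker for status
          allDone = false;
        }
      }
      if (!allDone) {
        Thread.sleep(5000);         // back off between polls
      }
    }
    for (Job job : jobs) {
      if (!job.isSuccessful()) {
        System.err.println("Job failed: " + job.getJobName());
      }
    }
  }
}

Whether this sidesteps the ClassCastException depends on what triggers it; the sketch only shows that submit() plus isComplete() polling can replace waitForCompletion() for the waiting part.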
Hey Humayun,
To solve the too many fetch failures problem, you should configure host
mapping correctly.
Each tasktracker should be able to ping every other tasktracker by hostname.
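For example (the hostnames and addresses below are made up, just to illustrate), every node would carry the same /etc/hosts entries, and each node's own hostname should resolve to its real IP rather than to 127.0.0.1 or 127.0.1.1:

192.168.1.10  master
192.168.1.11  slave1
192.168.1.12  slave2

With that in place, the reduce tasks can fetch map output from the other tasktrackers by hostname.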
Regards,
Uma
From: Humayun kabir [humayun0...@gmail.com]
Sent: Thursday, December 22, 2011 2:
Hi,
Apache:
http://hadoop.apache.org/common/docs/current/cluster_setup.html
RHEL / CentOS:
http://mapredit.blogspot.com/p/get-hadoop-cluster-running-in-20.html
Ubuntu:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
- Alex
On Thu, Dec 22, 2011 at 10:24
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
This is the easiest guide for configuring Hadoop.
On Thu, Dec 22, 2011 at 2:54 PM, Humayun kabir wrote:
> Could someone please help me configure Hadoop, e.g. core-site.xml,
> hdfs-site.xml, mapred-site.xml, etc.?
>
Could someone please help me configure Hadoop, e.g. core-site.xml,
hdfs-site.xml, mapred-site.xml, etc.?
Please provide some examples; they are badly needed, because I run a 2-node
cluster and when I run the wordcount example it fails with too many fetch
failures.
Most places throughout the Hadoop framework instantiate the keys with
ReflectionUtils.newInstance(keyClass, configuration).
I thought that meant I could configure the keys before they read or write, but
apparently this is not so. The attached example
http://old.nabble.com/file/p330222
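For what it's worth, here is a hedged sketch of the one hook I know of on the paths that do pass a Configuration: ReflectionUtils.newInstance(keyClass, conf) calls setConf(conf) on the new object when the class implements Configurable (the class name and property below are made up for illustration):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Writable;

// Hypothetical key class, named here only for illustration.
// ReflectionUtils.newInstance(keyClass, conf) calls setConf(conf) on the new
// instance when the class implements Configurable and conf is non-null, so
// values picked up here are available before readFields()/write() run on
// those code paths.
public class ConfiguredKey implements Writable, Configurable {
  private Configuration conf;
  private int precision = 2;   // hypothetical tunable read from the config

  public void setConf(Configuration conf) {
    this.conf = conf;
    if (conf != null) {
      precision = conf.getInt("example.key.precision", 2);
    }
  }

  public Configuration getConf() {
    return conf;
  }

  public void write(DataOutput out) throws IOException {
    out.writeInt(precision);
  }

  public void readFields(DataInput in) throws IOException {
    precision = in.readInt();
  }
}

On code paths that pass a null Configuration, setConf() is skipped, which may be why the keys appear unconfigured.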