On Mon, Oct 12, 2009 at 5:56 AM, Martin Hall mart...@karmasphere.com wrote:
The license on the beta product is open-ended and we're committed to a free
version of the product in final release with at least as much functionality
as you see in the product today.
We're a business and have to
It seems no datanode is coming up: no blocks are reported, so HDFS is in safemode.
Check whether the datanodes are up, and that the network between the namenode and datanodes is fine.
2009/10/13 yibo820217 yibo0...@gmail.com
Hi, recently I ran into some problems.
At first, I start Hadoop:
#bin/start-all.sh
Then I mount HDFS to
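The checks suggested above (is the namenode in safemode, are any datanodes reporting blocks) can be run from the Hadoop install directory on the namenode; this is an untested sketch against a live 0.20.x cluster:

```shell
# Ask the namenode whether it is still in safemode.
bin/hadoop dfsadmin -safemode get

# List live datanodes and how many blocks they have reported;
# an empty datanode list confirms the "no datanode is coming up" diagnosis.
bin/hadoop dfsadmin -report
```

If the datanodes are genuinely up and reachable, the namenode should leave safemode on its own once enough blocks are reported.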
Hi.
Any idea about setting the replication value to 2?
Was this fixed in the patches for 0.18.3, and if yes, which patch is this?
Thanks.
On Thu, Aug 27, 2009 at 8:18 PM, Stas Oskin stas.os...@gmail.com wrote:
Hi.
Following on this issue, any idea if all the bugs were worked out in 0.20,
with
Hi all,
I have just done a fresh install of hadoop-0.20.1 on a small cluster
and can't get it to start up.
Could someone please help me diagnose where I might be going wrong?
Below are snippets of the logs from the namenode, a datanode and a
tasktracker.
I have successfully formatted the
Hi,
We are trying to set up a cluster (starting with 2 machines) using the
new 0.20.1 version.
On the master machine, just after the server starts, the name node
dies off with the following exception:
2009-10-13 01:22:24,740 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
Hi all,
I posted the three topics below:
NTT focuses on the social infrastructure with clouds
The major newspaper ASAHI talked about the cloud
NetWorld will dive into the cloud market with Bplats
http://jclouds.wordpress.com/
Thanks,
/mikio uzawa
Hi Tejas,
I just upgraded to 20.1 as well, and your config all looks the same as mine
except that in the core-site.xml I have:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
Maybe you need to add the port on yours. I haven't seen
I think you should edit the core-site.xml
(on both the master and slave machines).
--- core-site.xml ---
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master_hadoop:54310</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-0.20.1/tmp</value>
  </property>
</configuration>
I am using the 0.3 Cloudera scripts to start a Hadoop cluster on EC2 of
11 c1.xlarge instances (1 master, 10 slaves); that is the biggest
instance available, with 20 compute units and 4x 400GB disks.
I wrote some scripts to test many (hundreds of) configurations running a
particular Hive query to
I think you need to specify the port as well for the following property:
<property>
  <name>fs.default.name</name>
  <value>hdfs://master_hadoop</value>
</property>
On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar t...@umbc.edu wrote:
Hi,
We are trying to set up a cluster (starting with 2 machines) using the new
did you verify the name resolution?
On Tue, Oct 13, 2009 at 4:34 PM, Tejas Lagvankar t...@umbc.edu wrote:
I get the same error even if I specify the port number. I have tried with
port numbers 54310 as well as 9000.
Regards,
Tejas
On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:
By name resolution, I assume that you mean the names mentioned in
/etc/hosts. Yes, in the logs, the IP address appears in the beginning.
Correct me if I'm wrong.
I will also try using just the IPs instead of the aliases.
On Oct 13, 2009, at 12:37 PM, Kevin Sweeney wrote:
did you verify
Hey Kevin,
You were right...
I changed all my aliases to IP addresses. It worked!
Thank you all again :)
Regards,
Tejas
On Oct 13, 2009, at 12:41 PM, Tejas Lagvankar wrote:
By name resolution, I assume that you mean the name mentioned in
/etc/hosts. Yes, in the logs, the IP address
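The name-resolution check discussed in this thread can be sketched with the standard java.net API; the hostname here is a placeholder, so substitute the aliases from your own masters/slaves files and /etc/hosts:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Minimal sketch: verify that a hostname alias resolves to an IP address,
// the same lookup the Hadoop daemons perform when they contact each other.
public class ResolveCheck {
    public static String resolve(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return null; // the alias is not resolvable on this machine
        }
    }

    public static void main(String[] args) {
        // "master_hadoop" is a made-up example; pass your real alias as an argument.
        String host = args.length > 0 ? args[0] : "localhost";
        String ip = resolve(host);
        System.out.println(host + " -> " + (ip == null ? "UNRESOLVED" : ip));
    }
}
```

If an alias prints UNRESOLVED on any node, that node's daemons will fail just as described above.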
Is there a Bay Area (Silicon Valley) user group that gets together?
Thanks!
--- On Thu, 10/8/09, Lalit Kapoor lalitkap...@gmail.com wrote:
From: Lalit Kapoor lalitkap...@gmail.com
Subject: DC Hadoop Users Group Meetup - October 16th, 2009 6:30 PM
To: common-user@hadoop.apache.org
Date:
Hey Rebecca,
Yes, see http://www.meetup.com/hadoop/.
Regards,
Jeff
On Tue, Oct 13, 2009 at 3:09 PM, Rebecca Owen rebeccaow...@yahoo.com wrote:
Is there a Bay Area (Silicon Valley) user group that gets together?
Thanks!
--- On Thu, 10/8/09, Lalit Kapoor lalitkap...@gmail.com wrote:
From:
Hi,
I run Elastic MapReduce. The output of my application is a text file, where
each line is essentially a set of fields. It will fit very nicely into a
simple database, but which database
1. Is persistent after cluster shutdown;
2. Can be written to by many reducers?
Amazon SimpleDB could
You can put it into HBase. Or you can use the DBOutputFormat and interface with
an RDBMS.
Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz
On Tue, Oct 13, 2009 at 3:12 PM, Mark Kerzner markkerz...@gmail.com wrote:
Hi,
I run Elastic MapReduce. The output
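The DBOutputFormat route mentioned above amounts to a few lines of job configuration. This is an untested sketch against the Hadoop 0.20 "old" mapred API (it needs the Hadoop jars and a JDBC driver on the classpath); the table and column names ("results", "line", "count") are made up for illustration, and your output value class must implement DBWritable:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBOutputFormat;

public class DbOutputSetup {
    public static JobConf configure(JobConf conf) {
        // Route reducer output through DBOutputFormat instead of HDFS files.
        conf.setOutputFormat(DBOutputFormat.class);
        // JDBC driver class and connection URL; substitute your own database.
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://dbhost/mydb", "user", "password");
        // Target table and its columns, in the order your DBWritable writes them.
        DBOutputFormat.setOutput(conf, "results", "line", "count");
        return conf;
    }
}
```

Each reducer then opens its own connection, so the database has to tolerate as many concurrent writers as there are reduce tasks.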
Hey Mark,
You will probably get some mileage from
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2571.
Regards,
Jeff
On Tue, Oct 13, 2009 at 3:19 PM, Amandeep Khurana ama...@gmail.com wrote:
You can put into Hbase. Or you can use the DBOutputFormat and interface
with
Thank you, all. It looks like SimpleDB may be good enough for my needs. The
forums claim that you can write to it from all reducers at once, since it
is highly optimized for concurrent access.
On Tue, Oct 13, 2009 at 5:30 PM, Jeff Hammerbacher ham...@cloudera.com wrote:
Hey Mark,
You will
Hi all,
Given a map task, I need to know the IP address of the machine where
that task is running. Is there any existing method to get that
information?
Thank you,
Van
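One way to answer the question above: a task runs in its own JVM on the worker node, so inside the map task you can simply ask the standard java.net API for the local address. A minimal sketch (the class name is made up; drop the method into your mapper):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: report the IP address of the machine the current JVM is on.
// Called from inside a map task, that JVM is the task's JVM, so this
// gives the address of the node running the task.
public class TaskHostAddress {
    public static String localIp() {
        try {
            return InetAddress.getLocalHost().getHostAddress();
        } catch (UnknownHostException e) {
            return null; // local hostname could not be resolved
        }
    }

    public static void main(String[] args) {
        System.out.println("running on " + localIp());
    }
}
```

Note that on machines with several interfaces this returns whichever address the local hostname resolves to, which may not be the one the cluster uses.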