Hi:
You need to configure your nodes to ensure that node 1 can connect to node
2 without a password.
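For reference, a minimal sketch of the usual passwordless-SSH setup (paths assume OpenSSH defaults; `user@datanode` is a placeholder for your actual datanode account and hostname):

```shell
# On the namenode: generate a key pair with no passphrase.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Append the public key to the datanode's authorized_keys.
# (ssh-copy-id does the same where available.)
cat ~/.ssh/id_rsa.pub | ssh user@datanode \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'

# Verify: this should log in without prompting for a password.
ssh user@datanode hostname
```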
On Tue, Sep 16, 2008 at 2:04 PM, souravm [EMAIL PROTECTED] wrote:
Hi All,
I'm facing a problem in configuring hdfs in a fully distributed way in Mac
OSX.
Here is the topology -
1. The
Hi,
I tried the way you suggested. I set up SSH without a password, so now the
namenode can connect to the datanode without a password - the start-dfs.sh
script does not ask for any password. However, even with this fix I still face
the same problem.
Regards,
Sourav
- Original Message -
From: Mafish
OK, I've tried what you suggested and all sorts of combinations, with no luck.
Then I went through the source of the Streaming lib. It looks like it
checks for the existence of the combiner while it is building the JobConf,
i.e. before the job is sent to the nodes.
It calls Class.forName() on the
Paco NATHAN wrote:
We use an EC2 image onto which we install Java, Ant, Hadoop, etc. To
make it simple, pull those from S3 buckets. That provides a flexible
pattern for managing the frameworks involved, more so than needing to
re-do an EC2 image whenever you want to add a patch to Hadoop.
Given
Looks like you have to wait for HADOOP-3570 and use -libjars for that.
Thanks
Amareshwari
Christian Ulrik Søttrup wrote:
OK, I've tried what you suggested and all sorts of combinations, with no
luck.
Then I went through the source of the Streaming lib. It looks like it
checks for the
zheng daqi wrote:
Hello,
I ran into a problem when compiling Hadoop's Pipes example (under Cygwin).
I searched a lot and found that nobody has met the same problem.
Could you please give me some suggestions? Thanks very much.
BUILD FAILED
The Rough Cut of the book is now available from
http://oreilly.com/catalog/9780596521998/index.html. There are a few
chapters available already, at various stages of completion. I'd love
to hear any suggestions for improvements that you may have. You can
give feedback on the Safari website where
Thanks, Steve -
Another flexible approach to handling messages across firewalls,
between the JobTracker and worker nodes, etc., would be to place an AMQP
message broker on the JobTracker node and another inside our local network.
We're experimenting with RabbitMQ for that.
On Tue, Sep 16, 2008 at 4:03 AM,
Check the namenode's log on machine 1 to see if your namenode started
successfully :)
On Tue, Sep 16, 2008 at 2:04 PM, souravm [EMAIL PROTECTED] wrote:
Hi All,
I'm facing a problem in configuring hdfs in a fully distributed way in Mac
OSX.
Here is the topology -
1. The namenode is in
Hi,
The namenode on machine 1 has started. I can see the following log. Is there a
specific way to provide the master name in the masters file (in hadoop/conf) on
the datanode? I've currently specified
2008-09-16 07:23:46,321 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics
On Mon, Sep 15, 2008 at 8:23 AM, Brian Vargas [EMAIL PROTECTED] wrote:
A simple solution is to just load all of the small files into a sequence
file, and process the sequence file instead.
I use this approach too. I make SequenceFiles with
key= the file name (Text)
value= the contents of the
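A minimal sketch of that packing step, assuming the classic SequenceFile.createWriter API is on the classpath; BytesWritable here stands in for whatever value type you actually use:

```java
import java.io.File;
import java.io.FileInputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Packs many small local files into one SequenceFile:
// key = file name (Text), value = file contents (BytesWritable).
public class SmallFilesToSequenceFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path out = new Path(args[0]);

        SequenceFile.Writer writer =
            SequenceFile.createWriter(fs, conf, out, Text.class, BytesWritable.class);
        try {
            for (int i = 1; i < args.length; i++) {
                File f = new File(args[i]);
                byte[] buf = new byte[(int) f.length()];
                FileInputStream in = new FileInputStream(f);
                try {
                    // Read the whole file into buf.
                    int off = 0;
                    while (off < buf.length) {
                        int n = in.read(buf, off, buf.length - off);
                        if (n < 0) break;
                        off += n;
                    }
                } finally {
                    in.close();
                }
                writer.append(new Text(f.getName()), new BytesWritable(buf));
            }
        } finally {
            writer.close();
        }
    }
}
```

The resulting file can then be processed with SequenceFileInputFormat instead of one map task per small file.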
Today I ran into a strange situation: while some MapReduce jobs were running,
all the TaskTrackers in the cluster simply disappeared without any apparent
reason, but the JobTracker remained alive as if nothing had happened. Even its
web interface was still running, showing zero capacity for maps and
I'm trying to use JavaSerialization for a series of MapReduce jobs, and when
it comes to reading a SequenceFile using SequenceFileInputFormat with
JavaSerialized objects, something breaks down.
I've added org.apache.hadoop.io.serializer.JavaSerialization to the
io.serializations property in my
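For comparison, a hedged sketch of what that property entry usually looks like in hadoop-site.xml; note that WritableSerialization should be kept in the list alongside JavaSerialization, since SequenceFiles and built-in types still depend on it (class names per the 0.18-era org.apache.hadoop.io.serializer package):

```xml
<property>
  <name>io.serializations</name>
  <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.JavaSerialization</value>
</property>
```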
*HeadlineDocument* in the code below is equivalent to *MyObject* - I forgot
to obfuscate that one... oops...
On Tue, Sep 16, 2008 at 11:46 AM, Jason Grey [EMAIL PROTECTED] wrote:
I'm trying to use JavaSerialization for a series of MapReduce jobs, and
when it comes to reading a SequenceFile
Hello, Steve Loughran.
After I type it in on the command line,
it seems configure is OK.
$ src/c++/pipes/configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
Hello,
A strange thing happened in my job. In the reduce phase, one of the tasks'
status shows 101.44% complete, runs up to about 102%, and then successfully
finishes back at 100%. Is this the right behavior?
I put a quick copy-paste of the web GUI reporting completion of the
tasks. (sorry for the bad
On Sep 16, 2008, at 12:26 PM, pvvpr wrote:
Hello,
A strange thing happened in my job. In the reduce phase, one of the tasks'
status shows 101.44% complete, runs up to about 102%, and then successfully
finishes back at 100%. Is this the right behavior?
Which version of Hadoop are you running; and are
The next Hadoop User Group (Bay Area) meeting is scheduled for
Wednesday, Sept 17th from 6 - 7:30 pm at Yahoo! Mission College, Santa
Clara, CA, Building 1, Training Rooms 34.
Agenda:
Update on Hadoop Camp at ApacheCon
Cloud Computing Testbed - Thomas Sandholm, HP
Katta on Hadoop - Stefan
Sorry to hijack the discussion, but I've been seeing the same behavior after
upgrading from 0.17.2 to 0.18.1-dev. I am using map output compression.
-- Stefan
From: Arun C Murthy [EMAIL PROTECTED]
Reply-To: core-user@hadoop.apache.org
Date: Tue, 16 Sep 2008 12:56:28 -0700
To:
Hi.
Any pointer on what could be the problem ?
Regards,
Sourav
From: souravm
Sent: Tuesday, September 16, 2008 1:07 AM
To: 'core-user@hadoop.apache.org'
Subject: Re: Need help in hdfs configuration fully distributed way in Mac OSX...
Hi,
I tried the way
Unfortunately I don't know of a solution to your problem, but I've been
experiencing the exact same issues while trying to implement a Protocol
Buffer serialization. Take a look:
https://issues.apache.org/jira/browse/HADOOP-3788
I hope this helps others to diagnose your problem.
Alex
On Wed,
Did you try using the same config file (the one used on machine 2) on all the
nodes? You can make the configs you have work with more effort, but I don't
think that is necessary.
Raghu.
souravm wrote:
Hi.
Any pointer on what could be the problem ?
Regards,
Sourav
I don't think anyone has ever tried running the pipes code under windows, so
there may well be portability problems. If you figure out solutions for the
problems, please file patches.
-- Owen
I am using 0.18.1-dev, upgraded from 0.18.0. I am also using compression for
map outputs.
- Prasad.
On Wednesday 17 September 2008 01:26:28 am Arun C Murthy wrote:
On Sep 16, 2008, at 12:26 PM, pvvpr wrote:
Hello,
A strange thing happened in my job. In reduce phase, one of the tasks
Prasad Pingali wrote:
I am using 0.18.1-dev, upgraded from 0.18.0. I am also using compression for
map outputs.
I think this is fixed in 0.19; see
https://issues.apache.org/jira/browse/HADOOP-3131. We see this with
compression turned on.
Amar
- Prasad.
On Wednesday 17 September 2008