ashishgandhe wrote:
Hi Chandra,
Were you able to resolve this error? I'm facing the exact same issue.
Hi, yes, I was able to fix this. It was a firewall issue; try disabling the
firewall on all nodes in the cluster.
Thanks,
S.Chandravadana
Thanks,
Ashish
chandravadana
If you change the hostname, you must also change /etc/hosts and
/etc/sysconfig/network,
for example:
-bash-3.00$ more /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.102.205
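The /etc/hosts listing above is cut off in the archive. For completeness, here is a minimal sketch of the companion /etc/sysconfig/network entry; the hostname shown is a made-up placeholder, not taken from the original message:
-bash-3.00$ more /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node1.example.com
After editing both files, the new name typically takes effect after running the hostname command or rebooting the node.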
Dear all,
I had configured all the nodes (master and slaves) with the correct hostnames,
and all the slaves can be reached by hostname from the master, and vice versa.
But in my hadoop-site.xml file, if I configure the master's
fs.default.name and mapred.job.tracker with the hostname, e.g.
datacenter5:9000 and
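For reference, a minimal sketch of how those two hadoop-site.xml properties are usually written; the hdfs:// prefix and the job tracker port are assumptions on my part, since the original message is cut off before showing them:
<property>
  <name>fs.default.name</name>
  <value>hdfs://datacenter5:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>datacenter5:9001</value>
</property>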
On Fri, Oct 17, 2008 at 5:43 AM, JoshuaRec [EMAIL PROTECTED] wrote:
I posted about a position with a client of mine a few days ago, and got some
great responses, people who I think are qualified for the position. Of
course, the process takes a
What: Katta and a Case Study
When: November 10, 2008 6:30 PM
Location:
ContextWeb,
9th floor
22 Cortlandt Street
New York, NY 10007
Learn more here and RSVP:
http://www.meetup.com/Hadoop-NYC/calendar/8979383/
If I understand your problem correctly, one solution that worked for me
is to use the -libjars flag when launching your hadoop job:
bin/hadoop jar -libjars <comma-separated jars> yourMainClass.jar args...
I used this solution on my 5-slave cluster. I needed to have the third
party jar files to
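As a concrete, hypothetical example of that invocation, with made-up jar and class names, and assuming the job's main class parses generic options via ToolRunner/GenericOptionsParser so that -libjars is actually honored:
bin/hadoop jar myjob.jar com.example.MyJob \
    -libjars /path/to/thirdparty1.jar,/path/to/thirdparty2.jar \
    input-dir output-dir
The jars listed after -libjars are shipped to the task nodes through the distributed cache, which also matches the workaround mentioned later in this thread.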
Hi Steve!
I think you can pass -jobconf mapred.map.tasks=$MAPPERS -jobconf
mapred.reduce.tasks=$REDUCERS
to the streaming job to set the number of mappers and reducers (see the sketch below).
Regards, Erik
On Wed, Oct 15, 2008 at 4:25 PM, Steve Gao [EMAIL PROTECTED] wrote:
Is there a way to change number of mappers in
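A full streaming command along those lines might look like the sketch below; the streaming jar path, input/output paths, and mapper/reducer commands are placeholders, not taken from the original thread:
bin/hadoop jar contrib/streaming/hadoop-*-streaming.jar \
    -input /user/steve/input \
    -output /user/steve/output \
    -mapper /bin/cat \
    -reducer /usr/bin/wc \
    -jobconf mapred.map.tasks=$MAPPERS \
    -jobconf mapred.reduce.tasks=$REDUCERS
Note that mapred.reduce.tasks is honored exactly, while mapred.map.tasks is only a hint: the actual number of map tasks is driven by the number of input splits.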
That's right. I have had success with 0.18 (from EC2) and 0.18.1 (my
local installation) as well.
Kyle
On Tue, 2008-10-07 at 09:13 +0530, Amareshwari Sriramadasu wrote:
Hi,
From 0.19 on, the jars added using -libjars are available on the client
classpath as well; this was fixed by HADOOP-3570.
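On releases before 0.19, one common workaround (my suggestion, not something stated in this thread) is to add the extra jar to the client classpath yourself through HADOOP_CLASSPATH while still passing -libjars for the task side, for example:
export HADOOP_CLASSPATH=/path/to/thirdparty.jar
bin/hadoop jar myjob.jar com.example.MyJob \
    -libjars /path/to/thirdparty.jar input-dir output-dir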
We're trying to get all the patches available by the end of next week.
Regards,
Jerome.
On 10/17/08 1:24 AM, Alex Loddengaard [EMAIL PROTECTED] wrote:
Thanks, Jerome. Any ETA on these patches and twiki updates?
I'm mostly interested in using Chukwa for log analysis. That is, I want to
get
On Oct 16, 2008, at 1:40 PM, Zhengguo 'Mike' SUN wrote:
I was trying to write an application using the Pipes API, but it
seems the serialization part is not working correctly. More
specifically, I can't deserialize a string from a StringInStream
constructed from context.getInputSplit().
Thanks Kyle, I tried the -libjars option but it didn't work; I tried it on the 0.18
version. But I guess I had not set the classpath, so I will try again.
Anyway, putting the jars in the distributed cache solved my problem, but the
-libjars option seems a lot more useful and easier to use :)
thanks,
Taran
On Fri, Oct
Hi, Owen,
Did you mean that the example with a C++ record reader is not complete? I have
to run this example with the class file of that WordCountInputFormat.java.
Also, it seemed that the semantics of the C++ Pipes API are different from Java's.
An InputSplit is a chunk of a file in Java, while
Hi all,
We've been running a pretty big job on 20 extra-large high-CPU EC2 servers
(Hadoop version 0.18, Java 1.6, the standard AMIs), and started getting the
dreaded "Could not find any valid local directory" error during the final
reduce phase.
I've confirmed that some of the boxes are running
Dear all,
We have, in our Data Warehouse system, about 600 ETL (Extract, Transform, Load)
jobs that create an interim data model. Some jobs are dependent on the completion of
others.
Assume that I create groups of interdependent jobs. Say a group G1 contains 100
jobs, G2 contains another 200 jobs which
Hi Ravion,
The problem you are describing sounds like a workflow where you must
be careful to verify certain conditions before proceeding to the next
step.
We have similar kinds of use cases for Hadoop apps at work, which are
essentially ETL. I recommend that you look at http://cascading.org as