Hi All
I'm having some trouble using the Eclipse plugin on a Windows XP machine
to connect to HDFS (Hadoop 0.19.0) on a Linux server - I'm getting an
"error: null" message, although the port number etc. are correct. Could this be
related to the user information? I've set it to the hadoop user o
Thanks guys - I have a clearer picture now :-)
Cheers
Arijit
2009/2/26 souravm
> On a 32-bit machine you are limited to a 4 GB heap size at the JVM level per machine
>
> - Original Message -
> From: Arijit Mukherjee
> To: core-user@hadoop.apache.org
> Sent: Wed Feb 25 21:2
> My machine's memory size is 16 GB, but when I set HADOOP_HEAPSIZE to
> 4 GB, it threw the exception referred to in this thread. How can I make
> full use of my memory? Thanks.
>
> 2009/2/26 Arijit Mukherjee
>
> > I was getting similar errors too while running the MapReduce samples
I was getting similar errors too while running the MapReduce samples. I
fiddled with hadoop-env.sh (where the HEAPSIZE is specified) and the
hadoop-site.xml files - and rectified it after some trial and error. But I
would like to know if there is a rule of thumb for this. Right now, I've a core
du
ClassInternal(ClassLoader.java:320)
> blueberry: Could not find the main class:
> Could_not_reserve_enough_space_for_the_card_marking_array. Program will
> exit.
What might have been happening here?
Regards
Arijit
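For context, the "Could_not_reserve_enough_space_for_the_card_marking_array" text is the JVM mangling its own error message into a class name: a 32-bit JVM cannot reserve a contiguous 4 GB heap, so the usable -Xmx in practice tops out around 2-3 GB even on a 16 GB machine. A minimal sketch of the setting in question, assuming the stock conf/hadoop-env.sh layout (the 2000 MB figure is a placeholder, not a recommendation from the thread):

```shell
# conf/hadoop-env.sh fragment (sketch). HADOOP_HEAPSIZE is in MB and is
# inherited by every Hadoop daemon started on this machine. On a 32-bit
# JVM, keep it comfortably below 4096 regardless of physical RAM - the
# process cannot address a 4 GB heap plus the card-marking array and
# other JVM overhead.
export HADOOP_HEAPSIZE=2000
```

To actually use the full 16 GB, the usual options are a 64-bit JVM, or running more, smaller JVMs (e.g. more task slots per node) rather than one large heap.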
2009/2/16 Arijit Mukherjee
Hi All
I'm trying to create a tiny 2-node cluster (both on Linux FC7) with Hadoop
0.19.0 - previously, I was able to install and run Hadoop on a single node.
Now I'm trying it on 2 nodes - my idea was to put the namenode and the
jobtracker on separate nodes, and initially use these two as the da
One correction - the number 5 in the mail below is my estimate of the
number of nodes we might need. Could this be too small a cluster?
Arijit
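For what it's worth, a 2-node layout like the one described can be sketched with the stock conf files (the hostnames node1/node2 are placeholders; in 0.19, conf/slaves lists the hosts that run a datanode and tasktracker, while conf/masters - somewhat confusingly - names the secondary namenode host):

```shell
# Sketch of the host lists for a tiny 2-node cluster. The namenode and
# jobtracker run wherever you invoke start-dfs.sh / start-mapred.sh;
# these files only list the other daemons' hosts.
mkdir -p conf
echo "node1" > conf/masters            # secondary namenode host
printf "node1\nnode2\n" > conf/slaves  # datanode + tasktracker hosts
```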
Dr. Arijit Mukherjee
Principal Member of Technical Staff, Level-II
Connectiva Systems (I) Pvt. Ltd.
J-2, Block GP, Sector V, Salt Lake
Kolkata 700 091, India
Phone: +91 (0)33 23577531/32 x 107
http://www.connectivasystems.com
ks on each node
be sufficient? Why would you need external storage in a Hadoop
cluster? How can I find out what other projects on Hadoop are using?
Cheers
Arijit
That's a very good overview, Paco - thanks for that. I might get back to
you with more queries about cascade etc. at some point - I hope you
don't mind.
Regards
Arijit
Thanks again, Enis. I'll have a look at Pig and Hive.
Regards
Arijit
Sent: Wednesday, September 24, 2008 2:57 PM
To: core-user@hadoop.apache.org
Subject: Re: Questions about Hadoop
Hi,
Arijit Mukherjee wrote:
> Hi
>
> We've been thinking of using Hadoop for a decision-making system which
> will analyze telecom-related data from various s
to create workflow-like functionality with MapReduce?
Regards
Arijit
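On the workflow question: in plain 0.19-era MapReduce, the simplest approach is a driver that runs the jobs in sequence, with each step reading the previous step's output. A minimal sketch (the jar name, class names, and HDFS paths are invented placeholders, not from the thread):

```shell
#!/bin/sh
# Hypothetical driver: approximating a two-step workflow by chaining
# MapReduce jobs, where step 2 consumes step 1's output directory.
set -e   # abort the pipeline if any step fails
hadoop jar analysis.jar com.example.Step1 /data/raw  /tmp/step1
hadoop jar analysis.jar com.example.Step2 /tmp/step1 /data/final
```

For richer DAG-style workflows, the higher-level tools mentioned elsewhere in the thread (Pig, Hive, Cascading-style libraries) essentially wrap this kind of chaining for you.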
Hi
Most likely, it's due to login permissions. Have you set up ssh for
accessing the nodes? This page might be helpful -
http://tinyurl.com/6lz6o3 - it contains a detailed explanation of the steps
you should follow.
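Those steps boil down to password-less key-based login for the user that starts the daemons. A hedged sketch (the user hadoop and host slave1 are placeholders):

```shell
# Hadoop's start-*.sh scripts ssh into every node listed in conf/slaves,
# so the master must reach each one without a password prompt.
ssh-keygen -t rsa -N "" -q -f ~/.ssh/id_rsa   # empty passphrase
ssh-copy-id hadoop@slave1                     # appends the public key to
                                              # ~/.ssh/authorized_keys on slave1
ssh hadoop@slave1 hostname                    # should log in with no prompt
```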
Hope this helps
Cheers
Arijit
ipse.
Something to do with the parameters on the advanced tab?
Arijit
a "552 spam score (5.6) exceeded threshold" error every time I replied
to this message.
raised these questions on the plugin forum, but thought someone here might be
able to help as well.
Regards
Arijit
localhost: Could not create the Java virtual machine.
Does the TaskTracker need more memory? The problem is that if I increase the
heap size in HADOOP_OPTS, all of the other Hadoop processes start
throwing the same error.
Can anyone point me in the right direction, please?
Thanks in advance
Arijit
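One way around that (a sketch, not from the thread): hadoop-env.sh in 0.19 also provides per-daemon *_OPTS variables, so you can raise the heap for just one daemon instead of the global HADOOP_OPTS. The 1024 MB figure is a placeholder:

```shell
# conf/hadoop-env.sh fragment: give only the tasktracker a larger heap.
# The other daemons keep the default heap size, so they won't start
# failing with "Could not create the Java virtual machine".
export HADOOP_TASKTRACKER_OPTS="-Xmx1024m $HADOOP_TASKTRACKER_OPTS"
```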