Sandy, I have no idea about your issue :(
Zander,
Your problem is probably about this JIRA issue:
http://issues.apache.org/jira/browse/HADOOP-1212
Two workarounds are explained here:
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)#java.io.IOException:_Incompatibl
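If it's the "Incompatible namespaceIDs" case from that page, a minimal sketch of
the non-destructive workaround (the paths below are the defaults under /tmp for
a user named hadoop; adjust them to your dfs.name.dir / dfs.data.dir):

  bin/stop-all.sh
  # read the namenode's ID on the master...
  grep namespaceID /tmp/hadoop-hadoop/dfs/name/current/VERSION
  # ...then edit the datanode's VERSION file on each slave to match it
  vi /tmp/hadoop-hadoop/dfs/data/current/VERSION
  bin/start-all.sh

The other workaround is simply to reformat HDFS (bin/hadoop namenode -format),
which wipes all data.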
As far as I know, datanodes just know the block ID, and the namenode knows
which file this belongs to.
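On the namenode side you can see that mapping with fsck; for example (the path
is just a placeholder):

  bin/hadoop fsck /user/hadoop/somefile -files -blocks -locations

which prints each block ID of the file and the datanodes currently holding it.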
On Mon, Feb 16, 2009 at 4:54 PM, Amandeep Khurana wrote:
> Ok. Thanks..
>
> Another question now. Do the datanodes have any way of linking a particular
> block of data to a global file identifier?
Where are your namenode and datanode storing the data? By default, it goes
into the /tmp directory. You might want to move that out of there.
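For example, set hadoop.tmp.dir (which dfs.name.dir and dfs.data.dir default
under) in conf/hadoop-site.xml; the path here is only an example:

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-datastore</value>
  </property>

After moving it you'll have to format the new location once with
bin/hadoop namenode -format.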
Amandeep
Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz
On Mon, Feb 16, 2009 at 8:11 PM, Mark Kerzner wrote:
Hi all,
I consistently have this problem that I can run HDFS and restart it after
short breaks of a few hours, but the next day I always have to reformat HDFS
before the daemons begin to work.
Is that normal? Maybe this is treated as temporary data, and the results
need to be copied out of HDFS a
hi,
i am not seeing the DataNode run either, but i am seeing an extra process,
TaskTracker, running.
here is what happens when i start the cluster, run jps, and stop the cluster...
had...@node0:/usr/local/hadoop$ bin/start-all.sh
starting namenode, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-
Ok. Thanks..
Another question now. Do the datanodes have any way of linking a particular
block of data to a global file identifier?
Amandeep
Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz
On Sun, Feb 15, 2009 at 9:37 PM, Matei Zaharia wrote:
> In gen
http://linuxproblem.org/art_9.html
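That article is the passwordless-SSH setup; roughly, assuming the hadoop user
name from the tutorial:

  ssh-keygen -t rsa -P ""
  # push the public key to each node you start daemons on
  cat ~/.ssh/id_rsa.pub | ssh hadoop@node1.local 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys'
  ssh hadoop@node1.local    # should now log in without a password prompt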
On Tue, Feb 17, 2009 at 5:51 AM, zander1013 wrote:
>
> hi,
>
> i am going through the tutorial for a multi-node cluster by m. noll. i am at
> the section where i try to start the cluster but when i run bin/start-dfs.sh
> i get the error...
>
> had...@node1.local
hi,
i am going through the tutorial for a multi-node cluster by m. noll. i am at
the section where i try to start the cluster but when i run bin/start-dfs.sh
i get the error...
had...@node1.local's password: node1.local: Permission denied, please try
again.
node1.local: Connection closed by 169.
> I didn't understand the usage of "malicious" here,
> but any process using HDFS api should first ask NameNode where the
Rasit,
Matei is referring to the fact that a malicious piece of code can bypass the
NameNode and connect to any data node directly, or probe all data nodes for
that matter. There i
i resolved this issue by appending .local to the target hostname when i ssh.
for example my nodes are node0 and node1, so i succeed when i ssh
node0.local and node1.local. output is as given in the tutorial.
alonzo
Anum wrote:
>
> I have nearly the same issue; I can't ssh or telnet to the node, an
I have the same problem.
is there any solution to this?
Thibaut
--
View this message in context:
http://www.nabble.com/AlredyBeingCreatedExceptions-after-upgrade-to-0.19.0-tp21631077p22043484.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
Hi Rasit,
Thanks for your response!
I saw the previous threads by Jerro and Mithila, but I think my problem is
slightly different. My datanodes are not being started, period. From a
previous thread:
"The common reasons for this case are configuration errors, installation
errors, or network conne
I have nearly the same issue; I can't ssh or telnet to the node, and I don't
think the provided link works for Fedora.
On Mon, Feb 16, 2009 at 5:17 AM, Norbert Burger wrote:
> If you can't ssh directly to node1's IP address, then it seems you have a
> basic network configuration issue which is really out
Yes, I've tried the long solution;
when I execute ./hadoop dfs -put ... from a datanode,
one copy always gets written to that datanode.
But I think I should use SSH for this (a sketch follows below).
Does anybody know a better way?
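A sketch of that SSH route, assuming a machine named "client" (hypothetical)
that has the Hadoop config installed but runs no DataNode, so the first
replica lands on a randomly chosen datanode instead:

  scp /local/file hadoop@client:/tmp/file
  ssh hadoop@client '/usr/local/hadoop/bin/hadoop dfs -put /tmp/file /user/rasit/file'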
Thanks,
Rasit
2009/2/16 Rasit OZDAS :
> Thanks, Jeff.
> After considering JIRA link you'
If you can't ssh directly to node1's IP address, then it seems you have a
basic network configuration issue which is really outside the scope of
Hadoop setup. In general, you should make sure that:
0) nodes are physically connected (use crossover cable if necessary)
1) your nodes are configured f
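A few quick checks along those lines, using the hostnames from this thread:

  ping -c 3 node1.local              # basic reachability
  ssh hadoop@node1.local hostname    # ssh reaches the box you expect
  cat /etc/hosts                     # names must map to the right IPs on every node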
That's awkward, the site went down!
And OK, I'll note these points for the future.
Thanks.
On 2/16/09, Steve Loughran wrote:
> Anum Ali wrote:
>> The parser problem is related to jar files; it can be resolved and is not a bug.
>>
>> Forwarding link for its solution
>>
>>
>> http://www.jroller.com/navanee
Anum Ali wrote:
The parser problem is related to jar files; it can be resolved and is not a bug.
Forwarding link for its solution
http://www.jroller.com/navanee/entry/unsupportedoperationexception_this_parser_does_not
this site is down; I can't see it.
It is a bug, because I view all operations proble
Hi All
I'm trying to create a tiny 2-node cluster (both on linux FC7) with Hadoop
0.19.0 - previously, I was able to install and run hadoop on a single node.
Now I'm trying it on 2 nodes - my idea was to put the name node and the job
tracker on separate nodes, and initially use these two as the da
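For that layout, the knobs are conf/masters, conf/slaves, and two properties in
conf/hadoop-site.xml; node-a and node-b below are placeholders for the two hosts:

  # conf/slaves -- where datanodes/tasktrackers start
  node-a
  node-b

  <!-- conf/hadoop-site.xml -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node-a:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>node-b:9001</value>
  </property>

Note that conf/masters only lists the secondary namenode; the namenode itself
runs on whichever host you invoke bin/start-dfs.sh from.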
Sandy, as far as I remember, there were some threads about the same
problem (I don't know if it's solved). Searching the mailing list for
this error: "could only be replicated to 0 nodes, instead of 1" may
help.
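Two commands that usually show whether any datanodes registered at all (this
is generic usage, not Sandy's actual output):

  bin/hadoop dfsadmin -report    # live/dead datanodes and their capacity
  bin/hadoop fsck /              # overall filesystem health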
Cheers,
Rasit
2009/2/16 Sandy :
> just some more information:
> hadoop fsck produces:
Hi,
Although it's not MySQL, this might be of use:
http://svn.apache.org/repos/asf/hadoop/core/trunk/src/examples/org/apache/hadoop/examples/DBCountPageView.java
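If you just want to see the database path work end to end, that example ships
with Hadoop; the driver name "dbcount" and the jar name are from memory, so
run bin/hadoop jar with no program name to check the exact spelling:

  # DBCountPageView uses an embedded HSQLDB, so no MySQL setup is needed
  bin/hadoop jar hadoop-0.19.0-examples.jar dbcount
  # for MySQL, also drop the JDBC driver jar into $HADOOP_HOME/lib on every node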
Fredrik
On Feb 16, 2009, at 8:33 AM, sandhiya wrote:
@Amandeep
Hi,
I'm new to Hadoop and am trying to run a simple database conn
There is no patch for the Capacity Scheduler for 0.18.x.
> -----Original Message-----
> From: Bill Au [mailto:bill.w...@gmail.com]
> Sent: Saturday, February 14, 2009 1:00 AM
> To: core-user@hadoop.apache.org
> Subject: capacity scheduler for 0.18.x?
>
> I see that there is a patch for the fai
Thanks, Jeff.
After considering the JIRA link you've given and making some investigation:
it seems that this JIRA ticket didn't draw much attention, so it will
take a long time to be considered.
After some more investigation I found out that when I copy a file to
HDFS from a specific DataNode, the first copy