Re: HOD questions

2008-12-19 Thread Craig Macdonald
Hi Hemanth, While HOD does not do this automatically, please note that since you are bringing up a Map/Reduce cluster on the allocated nodes, you can submit map/reduce parameters with which to bring up the cluster when allocating jobs. The relevant options are
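For reference, an allocation with Map/Reduce parameters looked roughly like this; the -M syntax is from the HOD guide of that era and the parameter values here are arbitrary examples, so verify against your HOD version:

    hod allocate -d ~/hod-clusters/test -n 5 \
        -Mmapred.reduce.parallel.copies=20 -Mio.sort.factor=100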

Re: Hit a roadbump in solving truncated block issue

2008-12-19 Thread Brian Bockelman
Hey Raghu, I never heard back from you about whether any of these fixes are ready to try out. Things are getting kind of bad here. Even at three replicas, I found one block which has all three replicas of length=0. Grepping through the logs, I get things like this: 2008-12-18

Re: Datanode handling of single disk failure

2008-12-19 Thread Konstantin Shvachko
Brian Bockelman wrote: Hello all, I'd like to take the datanode's capability to handle multiple directories to a somewhat-extreme, and get feedback on how well this might work. We have a few large RAID servers (12 to 48 disks) which we'd like to transition to Hadoop. I'd like to mount
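The layout under discussion, one dfs.data.dir entry per mounted disk, would be expressed in hadoop-site.xml roughly as follows (the mount points are hypothetical):

    <property>
      <name>dfs.data.dir</name>
      <value>/mnt/disk01/hdfs/data,/mnt/disk02/hdfs/data,/mnt/disk03/hdfs/data</value>
    </property>

Note that at this point a failure of any one configured directory could take the whole datanode down, which is the risk this thread is weighing.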

Re: Datanode handling of single disk failure

2008-12-19 Thread Brian Bockelman
Thank you Konstantin, this information will be useful. Brian On Dec 19, 2008, at 12:37 PM, Konstantin Shvachko wrote: Brian Bockelman wrote: Hello all, I'd like to take the datanode's capability to handle multiple directories to a somewhat-extreme, and get feedback on how well this

Re: Hit a roadbump in solving truncated block issue

2008-12-19 Thread Garhan Attebury
Actually we do have the namenode logs for the period Brian mentioned. In Brian's email, he shows the log entries on node191 corresponding to it storing the third (new) replica of the block in question. The namenode log from that period shows: 2008-12-12 08:53:02,637 INFO

Re: Failed to start TaskTracker server

2008-12-19 Thread Sagar Naik
Well, you have some process which grabs this port, and Hadoop is not able to bind to it. By the time you check, there is a chance that the socket connection has died but the port was occupied when the Hadoop process was attempting to bind. Check all the processes running on the system. Do any of the processes acquire
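One way to see what currently holds the port (50060, per later messages in this thread) is to ask the OS directly; a minimal sketch, assuming netstat and lsof are installed and you can see other users' processes:

    # show which process, if any, is bound to the TaskTracker HTTP port
    netstat -anp | grep 50060
    # alternative, where lsof is available
    lsof -i :50060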

Architecture question.

2008-12-19 Thread aakash_j j_shah
Hello All, I am designing an architecture which should support a storage capacity of 10 million records and 1 million updates per minute. Data persistence is not that important, as I will be purging this data every day. I am familiar with memcache but not Hadoop. It will be great if I can get

Re: Architecture question.

2008-12-19 Thread Edwin Gonzalez
How large are the records? 1 million updates per minute... do you mind sharing the complexity of the updates? On Fri, Dec 19, 2008 at 8:05 PM aakash_j j_shah aakash_j_s...@yahoo.com wrote: Hello All, I am designing an architecture which should support a storage capacity of 10 million records and

Re: Architecture question.

2008-12-19 Thread aakash_j j_shah
Hello Edwin, Thanks for the answer. The records are very small: the key is usually about 64 bytes (ASCII) and updates are for 10 integer values, so I would say that the record size including the key is about 104 bytes. Sid. --- On Fri, 12/19/08, Edwin Gonzalez gonza...@zenbe.com wrote: From: Edwin
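A quick back-of-the-envelope from the figures in this thread (assuming 4-byte integers, as the 104-byte total implies):

    64 bytes key + 10 x 4 bytes  =  104 bytes per record
    10,000,000 records x 104 bytes ~= 1.04 GB of raw data
    1,000,000 updates / 60 s       ~= 16,700 updates per second

At roughly 1 GB, the whole working set fits in RAM on a single server, which is presumably why memcache-style stores are on the table.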

Re: Failed to start TaskTracker server

2008-12-19 Thread Rico
Well, the machines are all servers that are probably running many services, but I have no permission to change or modify other users' programs or settings. Is there any way to change 50060 to another port? Sagar Naik wrote: Well, you have some process which grabs this port, and Hadoop is not able to bind

Re: Failed to start TaskTracker server

2008-12-19 Thread Sagar Naik
Check hadoop-default.xml; in there you will find all the ports used. Copy the XML nodes from hadoop-default.xml to hadoop-site.xml, change the port values in hadoop-site.xml, and deploy it on the datanodes. Rico wrote: Well, the machines are all servers that are probably running many services, but I
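For the TaskTracker HTTP port specifically, the override in hadoop-site.xml would look roughly like this (property name as found in hadoop-default.xml of the 0.18/0.19 era; the replacement port 51060 is an arbitrary example):

    <property>
      <name>mapred.task.tracker.http.address</name>
      <value>0.0.0.0:51060</value>
    </property>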

Re: Failed to start TaskTracker server

2008-12-19 Thread ascend1
I'll check and run a test. Thanks for your help! On 2008-12-20, Sagar Naik sn...@attributor.com wrote: Check hadoop-default.xml; in there you will find all the ports used. Copy the XML nodes from hadoop-default.xml to hadoop-site.xml, change the port values in hadoop-site.xml, and deploy it on