Hi Hemanth,
While HOD does not do this automatically, please note that since you
are bringing up a Map/Reduce cluster on the allocated nodes, you can
pass Map/Reduce configuration parameters to be used when the cluster
is brought up at allocation time. The relevant options are:
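If I remember right, the one you want is -M, which passes a Map/Reduce
server parameter down to the new cluster at allocation time; something
like this (syntax as in the HOD docs of this era, so please verify
against your release):

    # allocate 4 nodes, overriding two mapred parameters for the new cluster
    hod allocate -d ~/mycluster -n 4 \
        -Mmapred.reduce.tasks=8 \
        -Mmapred.child.java.opts=-Xmx512m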
Hey Raghu,
I never heard back from you about whether any of these fixes are ready
to try out. Things are getting kind of bad here.
Even with replication at three, I found one block for which all three
replicas have length=0. Grepping through the logs, I get things like this:
2008-12-18
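In case it is useful, here is how I have been spotting these (the path
and block id below are made up):

    # list block ids and replica locations for a suspect file
    hadoop fsck /path/to/suspect/file -files -blocks -locations

    # then chase the reported block id through the datanode logs
    grep 'blk_-1234567890' hadoop-*-datanode-*.log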
Brian Bockelman wrote:
Hello all,
I'd like to push the datanode's capability to handle multiple
directories to a somewhat extreme degree, and get feedback on how well
this might work.
We have a few large RAID servers (12 to 48 disks) which we'd like to
transition to Hadoop. I'd like to mount
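Presumably that means one dfs.data.dir entry per mount point; in
hadoop-site.xml it would look something like this (the paths are made up):

    <property>
      <name>dfs.data.dir</name>
      <!-- comma-separated list; the datanode spreads blocks across them -->
      <value>/mnt/disk01/hdfs/data,/mnt/disk02/hdfs/data,/mnt/disk03/hdfs/data</value>
    </property>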
Thank you Konstantin, this information will be useful.
Brian
On Dec 19, 2008, at 12:37 PM, Konstantin Shvachko wrote:
Brian Bockelman wrote:
Hello all,
I'd like to push the datanode's capability to handle multiple
directories to a somewhat extreme degree, and get feedback on how well
this
Actually we do have the namenode logs for the period Brian mentioned.
In Brian's email, he shows the log entries on node191 corresponding to
it storing the third (new) replica of the block in question. The
namenode log from that period shows:
2008-12-12 08:53:02,637 INFO
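The addStoredBlock messages are the ones to look for, e.g. (the block
id and address below are made up):

    grep 'NameSystem.addStoredBlock' hadoop-*-namenode-*.log

    # a typical hit looks roughly like:
    # 2008-12-12 08:53:02,637 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
    #   NameSystem.addStoredBlock: blockMap updated: 10.0.0.191:50010 is added
    #   to blk_-1234567890 size 0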
Well, you have some process that grabs this port, so Hadoop is not able to
bind it.
By the time you check, there is a chance that the socket connection has died
but the port was still occupied when the Hadoop process was attempting to bind.
Check all the processes running on the system.
Do any of the processes acquire
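To see who currently holds the port (50060 is the TaskTracker HTTP port
mentioned downthread):

    # who is listening on the port right now?
    netstat -anp | grep 50060   # Linux; run as root to see the owning pid
    lsof -i :50060              # alternative

A socket lingering in TIME_WAIT will also show up in the netstat output,
which would explain a failed bind even when no live process holds the port.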
Hello All,
I am designing an architecture that should support storage of 10 million
records and 1 million updates per minute. Data persistence is not that
important, as I will be purging this data every day.
I am familiar with memcache but not Hadoop. It would be great if I could
get
How large are the records?
1 million updates/min... do you mind sharing the complexity of the updates?
On Fri, Dec 19, 2008 at 8:05 PM, aakash_j_shah aakash_j_s...@yahoo.com
wrote:
Hello All,
I am designing an architecture that should support storage of 10 million
records and
Hello Edwin,
Thanks for the answer. Records are very small: the key is usually about 64
bytes (ASCII) and an update touches 10 integer values, so I would say the
record size including the key is about 104 bytes.
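Back-of-envelope, that works out to roughly:

    10,000,000 records x ~104 bytes ≈ 1 GB of resident data
    1,000,000 updates / 60 s        ≈ 16,700 updates per second
    16,700/s x 104 bytes            ≈ 1.7 MB/s of update traffic

So the working set fits comfortably in memory on a single box; the
interesting part is sustaining the update rate.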
Sid.
--- On Fri, 12/19/08, Edwin Gonzalez gonza...@zenbe.com wrote:
Well, the machines are all servers that are probably running many services,
but I have no permission to change or modify other users' programs or
settings. Is there any way to change 50060 to another port?
Sagar Naik wrote:
Well, you have some process that grabs this port, so Hadoop is not able
to bind
- Check hadoop-default.xml; in it you will find all the ports used. Copy the
relevant XML nodes from hadoop-default.xml to hadoop-site.xml, change the
port values in hadoop-site.xml,
and deploy it on the datanodes.
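For the 50060 web UI specifically, the property to copy over should be
mapred.task.tracker.http.address (if I recall the name right; check your
hadoop-default.xml):

    <property>
      <name>mapred.task.tracker.http.address</name>
      <!-- default is 0.0.0.0:50060; pick any free port -->
      <value>0.0.0.0:50061</value>
    </property>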
Rico wrote:
Well, the machines are all servers that are probably running many services,
but I
I'll check and run a test. Thanks for your help!
On 2008-12-20, Sagar Naik sn...@attributor.com wrote:
- Check hadoop-default.xml; in it you will find all the ports used. Copy the
relevant XML nodes from hadoop-default.xml to hadoop-site.xml, change the
port values in hadoop-site.xml,
and deploy it on