Re: Adding nodes

2012-03-01 Thread George Datskos

Mohit,

New datanodes connect to the namenode; that's how the namenode 
knows about them.  Just make sure the datanodes have the correct 
{fs.default.name} (the namenode URI) in their core-site.xml and then 
start them.  The namenode can, however, choose to reject a datanode 
if you are using the {dfs.hosts} and {dfs.hosts.exclude} settings in 
the namenode's hdfs-site.xml.
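
For reference, a minimal sketch of both sides of that handshake; the 
hostname, port, and file paths below are hypothetical:

  <!-- core-site.xml on each datanode: where to find the namenode -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>

  <!-- hdfs-site.xml on the namenode: optional allow/exclude lists -->
  <property>
    <name>dfs.hosts</name>
    <value>/etc/hadoop/conf/allowed-datanodes</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/etc/hadoop/conf/excluded-datanodes</value>
  </property>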


The namenode doesn't actually care about the slaves file.  It's only 
used by the start/stop scripts.
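
For example, a slaves file is just one hostname per line; 
bin/start-dfs.sh reads it to decide which hosts to ssh into, but the 
namenode itself never looks at it (hostnames here are hypothetical):

  # conf/slaves (read only by the start/stop scripts)
  datanode1.example.com
  datanode2.example.com
  datanode3.example.com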



On 2012/03/02 10:35, Mohit Anchlia wrote:

I actually meant to ask: how does the namenode/jobtracker know there is a new
node in the cluster?  Is it initiated by the namenode when the slaves file is
edited, or is it initiated by the tasktracker when the tasktracker is started?






Re: setting mapred.map.child.java.opts not working

2012-01-11 Thread George Datskos

Koji, Harsh

MAPREDUCE-478 seems to be in v1, but those new settings have not yet 
been added to mapred-default.xml (perhaps for backwards compatibility?).



George

On 2012/01/12 13:50, Koji Noguchi wrote:

Hi Harsh,

Wasn't MAPREDUCE-478 in 1.0?  Maybe the Jira is not up to date.

Koji


On 1/11/12 8:44 PM, Harsh J <ha...@cloudera.com> wrote:


These properties are not available on Apache Hadoop 1.0 (formerly
known as 0.20.x). This was a feature introduced in 0.21
(https://issues.apache.org/jira/browse/MAPREDUCE-478), and is
available today on the 0.22 and 0.23 lines of releases.

For 1.0/0.20, use mapred.child.java.opts, which applies to both map
and reduce tasks.
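
A minimal mapred-site.xml sketch of that 1.0/0.20 setting (the -Xmx 
value is just an example):

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx2048m</value>
  </property>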

It would also be helpful if you could tell us which doc guided you to use
these property names instead of the proper one, so we can fix it.

On Thu, Jan 12, 2012 at 8:44 AM, T Vinod Gupta <tvi...@readypulse.com> wrote:

Hi,
Can someone help me ASAP? When I run my mapred job, it fails with this
error:
12/01/12 02:58:36 INFO mapred.JobClient: Task Id :
attempt_201112151554_0050_m_71_0, Status : FAILED
Error: Java heap space
attempt_201112151554_0050_m_71_0: log4j:ERROR Failed to flush writer,
attempt_201112151554_0050_m_71_0: java.io.IOException: Stream closed
attempt_201112151554_0050_m_71_0:   at sun.nio.cs.StreamEncoder.ensureOpen(StreamEncoder.java:44)
attempt_201112151554_0050_m_71_0:   at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:139)
attempt_201112151554_0050_m_71_0:   at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
attempt_201112151554_0050_m_71_0:   at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
attempt_201112151554_0050_m_71_0:   at org.apache.hadoop.mapred.TaskLogAppender.flush(TaskLogAppender.java:94)
attempt_201112151554_0050_m_71_0:   at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:260)
attempt_201112151554_0050_m_71_0:   at org.apache.hadoop.mapred.Child$2.run(Child.java:142)


So I updated my mapred-site.xml with these settings:

  <property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx2048M</value>
  </property>

  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx2048M</value>
  </property>

Also, when I run my jar, I provide -Dmapred.map.child.java.opts=-Xmx4000m
at the end.  In spite of this, the task is not getting the max heap size
I'm setting.

Where did I go wrong?

After changing mapred-site.xml, I restarted the jobtracker and
tasktracker.  Is that not good enough?

Thanks







legacy hadoop versions

2011-12-19 Thread George Datskos
Is there an Apache Hadoop policy on maintenance/support of older 
Hadoop versions?  It seems like 0.20.20* (now 1.0), 0.22, and 0.23 are 
the currently active branches.  Regarding versions like 0.18 and 0.19, 
is there some policy, like "up to N years" or "up to M releases prior", 
under which legacy versions are still maintained?



George




Re: Hadoop Question

2011-07-28 Thread George Datskos

Nitin,

On 2011/07/28 14:51, Nitin Khandelwal wrote:

How can I determine if a file is being written to (by any thread) in HDFS?

That information is exposed by the NameNode HTTP servlet.  You can 
obtain it with the fsck tool (hadoop fsck /path/to/dir -openforwrite) 
or you can do an HTTP GET:

http://namenode:port/fsck?path=/your/path&openforwrite=1
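
A quick sketch of both approaches; the hostname, port, and path here 
are hypothetical (50070 was the default NameNode HTTP port on these 
releases):

  # list files still open for write under a directory
  hadoop fsck /user/nitin -openforwrite

  # the same query against the NameNode's fsck servlet
  curl 'http://namenode.example.com:50070/fsck?path=/user/nitin&openforwrite=1'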


George