[JIRA] (COLL-117) seeing warnings attempt to override final parameter: dfs.data.dir in tasktracker logs

2010-06-21 Thread Bikash Singhal

Hi Hadoopers,

I have received the following WARN messages in the Hadoop cluster. Has anybody seen
this? Any solution?


 2010-06-06 01:45:04,079 WARN org.apache.hadoop.conf.Configuration:
 /var/lib/hadoop-0.20/cache/hadoop/mapred/local/taskTracker/jobcache/job_201006060025_0003/job.xml:a attempt to override final parameter: dfs.data.dir;  Ignoring.

 2010-06-06 01:49:41,501 WARN org.apache.hadoop.conf.Configuration:
 /var/lib/hadoop-0.20/cache/hadoop/mapred/local/taskTracker/jobcache/job_201006060025_0004/job.xml:a attempt to override final parameter: dfs.data.dir;  Ignoring.





[jira] Created: (HADOOP-6832) Provide a web server plugin that uses a static user for the web UI

2010-06-21 Thread Owen O'Malley (JIRA)
Provide a web server plugin that uses a static user for the web UI
--

 Key: HADOOP-6832
 URL: https://issues.apache.org/jira/browse/HADOOP-6832
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.22.0


We need a simple plugin that uses a static user for clusters that have security enabled 
but don't want to authenticate users on the web UI.




Re: JIT debug symbols and OProfile under Hadoop

2010-06-21 Thread Shrinivas Joshi
Hi All,

I just wanted to check if anybody had a comment on this query.

Thanks,
-Shrinivas

On Wed, Jun 16, 2010 at 9:50 PM, Shrinivas Joshi jshrini...@gmail.com wrote:

 Sorry if this is a repeat email for you; I sent this to the common-dev list
 as well.

 Hello,
 I am trying to get profiles for a workload running on top of the Hadoop 0.20.2
 framework. The workload jars and Hadoop jars have been compiled with debug
 symbols enabled; I can see local variable tables and line number tables in
 the jar class files using javap. I am passing the right value for the
 -agentpath flag pointing to the OProfile JVMTI agent, and I can also see it
 passed correctly to the map and reduce JVMs in ps output. This is done
 through conf/mapred-site.xml properties. However, in the profiles I am
 unable to see meaningful samples for JITed code; everything is getting
 attributed to anon samples.

 Profiling a test app works just fine with the same JVM that is being used
 by Hadoop; I can see the right debug symbols attributed in the resulting
 profile. In fact, I even tried profiling a JVM process spawned via the
 ProcessBuilder API, the way Hadoop spawns its map and reduce JVMs, and that
 seemed to work fine too.

 Does anybody have any pointers as to what I might be missing here? Has
 anybody successfully collected profiles showing JIT debug symbols for any
 Hadoop workload? I am using OProfile 0.9.6.

 Thanks,

 -Shrinivas
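
For reference, a minimal sketch of the hand-off described above, setting the child-JVM
options programmatically instead of in conf/mapred-site.xml (the class name, agent
library path, and heap size are illustrative assumptions, not taken from the original
mail):

    // Hedged sketch: attach the OProfile JVMTI agent to the map/reduce child JVMs.
    // The TaskTracker passes these options to every child JVM it spawns, which is
    // the equivalent of setting mapred.child.java.opts in conf/mapred-site.xml.
    import org.apache.hadoop.mapred.JobConf;

    public class OprofileJobSetup {
        public static JobConf withOprofileAgent(JobConf conf) {
            conf.set("mapred.child.java.opts",
                     "-Xmx512m -agentpath:/usr/local/lib/oprofile/libjvmti_oprofile.so");
            return conf;
        }
    }

Setting it per-job this way, rather than cluster-wide in mapred-site.xml, limits the
profiling overhead to the job actually being investigated.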



Re: hadoop-cluster-1.ic node errors and warnings

2010-06-21 Thread Allen Wittenauer

[Cutting the CC: line down to size ]

On Jun 18, 2010, at 3:37 AM, Bikash Singhal wrote:

 Hi folks ,
 
 I have received this error in the Hadoop cluster. Has anybody seen this?
 Any solution?


Since you aren't picking anything out and you've shared a bunch of messages, 
I'm going to go with the first one.  If you have a specific question about a 
specific message, it would be helpful to point it out.


 - namenode errors and warning --
 
 2010-06-17 18:35:19,970 WARN
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place
 enough replicas, still in need of 1

This isn't an error, but a warning.

IIRC, you can get this either if the datanodes are out of space or if there are not 
enough racks defined to meet the replication policy.  For example, requiring 10 
replicas when only 3 racks are present will generate this warning.
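
For illustration, a minimal sketch (the file path and the replication factor of 10 are
made up for the example) of a client request that asks for more replicas than a small
cluster can place, which produces the same namenode warning:

    // Hedged sketch: ask for 10 replicas of a file's blocks. On a cluster with only a
    // handful of datanodes (or racks), the namenode cannot place them all and logs
    // "Not able to place enough replicas" while it keeps trying.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicaPlacementDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/tmp/replica-demo.txt");   // hypothetical path
            FSDataOutputStream out = fs.create(file, true, 4096,
                    (short) 10,                              // requested replication factor
                    fs.getDefaultBlockSize());
            out.writeBytes("trigger block placement");
            out.close();
        }
    }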

Re: [JIRA] (COLL-117) seeing warnings attempt to override final parameter: dfs.data.dir in tasktracker logs

2010-06-21 Thread Allen Wittenauer

Again, removing a bunch of CC:'es.

On Jun 21, 2010, at 2:26 AM, Bikash Singhal wrote:

 
 Hi Hadoopers,
 
 I have received the following WARN messages in the Hadoop cluster. Has anybody
 seen this? Any solution?
 
 
 2010-06-06 01:45:04,079 WARN org.apache.hadoop.conf.Configuration:
 /var/lib/hadoop-0.20/cache/hadoop/mapred/local/taskTracker/jobcache/job_201006060025_0003/job.xml:a attempt to override final parameter: dfs.data.dir;  Ignoring.

 2010-06-06 01:49:41,501 WARN org.apache.hadoop.conf.Configuration:
 /var/lib/hadoop-0.20/cache/hadoop/mapred/local/taskTracker/jobcache/job_201006060025_0004/job.xml:a attempt to override final parameter: dfs.data.dir;  Ignoring.


It means your client has 'dfs.data.dir' in its hadoop config.  On the server 
side, it is marked as final, which means the client can't override the value.  
This message is completely harmless for this particular parameter.
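
In other words, the server-side config declares dfs.data.dir with <final>true</final>,
so the value from the job's job.xml is dropped. A minimal sketch of the behaviour using
the org.apache.hadoop.conf.Configuration API (the resource paths and the example value
are illustrative assumptions):

    // Hedged sketch: a later resource (e.g. the job.xml shipped with a job) cannot
    // override a property that an earlier resource declared final; the override is
    // ignored and the WARN above is logged.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class FinalParamDemo {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Server-side hdfs-site.xml marking the property final, e.g.:
            //   <property>
            //     <name>dfs.data.dir</name>
            //     <value>/var/lib/hadoop-0.20/cache/hadoop/dfs/data</value>
            //     <final>true</final>
            //   </property>
            conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml")); // illustrative path
            conf.addResource(new Path("job.xml"));          // also sets dfs.data.dir; ignored
            System.out.println(conf.get("dfs.data.dir"));   // still the final, server-side value
        }
    }

Dropping dfs.data.dir from the client/job configuration makes the warning go away; since
the final server-side value wins anyway, the message can also simply be ignored.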




[jira] Created: (HADOOP-6833) IPC leaks call parameters when exceptions thrown

2010-06-21 Thread Todd Lipcon (JIRA)
IPC leaks call parameters when exceptions thrown


 Key: HADOOP-6833
 URL: https://issues.apache.org/jira/browse/HADOOP-6833
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2, 0.21.0, 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Attachments: hadoop-6833.txt

HADOOP-6498 moved the calls.remove() call lower into the SUCCESS clause of 
receiveResponse(), but didn't put a similar calls.remove into the ERROR clause. 
So, any RPC call that throws an exception ends up orphaning the Call object in 
the connection's calls hashtable. This prevents cleanup of the connection and 
is a memory leak for the call parameters.
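
A simplified, self-contained sketch of the pattern being described (a stand-in for
illustration, not the actual org.apache.hadoop.ipc.Client source; class and field names
are abbreviated) showing why the call must also be removed from the pending-calls table
on the error path:

    import java.util.Hashtable;

    public class IpcCallCleanupSketch {
        // Stand-in for Client.Call: holds either the RPC result or the error.
        static class Call {
            Object value;
            Exception error;
            void setValue(Object v)        { value = v; }
            void setException(Exception e) { error = e; }
        }

        // Stand-in for the connection's table of outstanding calls.
        final Hashtable<Integer, Call> calls = new Hashtable<Integer, Call>();

        void receiveResponse(int id, boolean success, Object value, Exception error) {
            Call call = calls.get(id);
            if (success) {
                call.setValue(value);
                calls.remove(id);   // removal kept by HADOOP-6498
            } else {
                call.setException(error);
                calls.remove(id);   // the removal this issue adds for the error path
            }
        }
    }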
