[Hadoop Wiki] Update of PoweredBy by SimoneLeo

2012-12-20 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The PoweredBy page has been changed by SimoneLeo:
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=411&rev2=412

Comment:
Updated CRS4 entry

    * ''Generating web graphs on 100 nodes (dual 2.4GHz Xeon Processor, 2 GB RAM, 72GB Hard Drive) ''
  
   * ''[[http://www.crs4.it|CRS4]] ''
-   * ''[[http://dx.doi.org/10.1109/ICPPW.2009.37|Computational biology applications]] ''
-   * ''[[http://www.springerlink.com/content/np5u8k1x9l6u755g|HDFS as a VM repository for virtual clusters]] ''
+   * ''Hadoop deployed dynamically on subsets of a 400-node cluster ''
+    * ''node: two quad-core 2.83GHz Xeons, 16 GB RAM, two 250GB HDDs ''
+   * ''Computational biology applications ''
  
   * ''[[http://crowdmedia.de/|crowdmedia]] ''
    * ''Crowdmedia has a 5 Node Hadoop cluster for statistical analysis ''


svn commit: r1424459 - /hadoop/common/trunk/hadoop-project/src/site/site.xml

2012-12-20 Thread tucu
Author: tucu
Date: Thu Dec 20 13:41:43 2012
New Revision: 1424459

URL: http://svn.apache.org/viewvc?rev=1424459&view=rev
Log:
HADOOP-8427. Convert Forrest docs to APT, incremental. (adi2 via tucu)

Modified:
hadoop/common/trunk/hadoop-project/src/site/site.xml

Modified: hadoop/common/trunk/hadoop-project/src/site/site.xml
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-project/src/site/site.xml?rev=1424459&r1=1424458&r2=1424459&view=diff
==============================================================================
--- hadoop/common/trunk/hadoop-project/src/site/site.xml (original)
+++ hadoop/common/trunk/hadoop-project/src/site/site.xml Thu Dec 20 13:41:43 2012
@@ -51,6 +51,8 @@
       <item name="Single Node Setup" href="hadoop-project-dist/hadoop-common/SingleCluster.html"/>
       <item name="Cluster Setup" href="hadoop-project-dist/hadoop-common/ClusterSetup.html"/>
       <item name="CLI Mini Cluster" href="hadoop-project-dist/hadoop-common/CLIMiniCluster.html"/>
+      <item name="File System Shell" href="hadoop-project-dist/hadoop-common/FileSystemShell.html"/>
+      <item name="Hadoop Commands Reference" href="hadoop-project-dist/hadoop-common/CommandsManual.html"/>
     </menu>
 
     <menu name="HDFS" inherit="top">




svn commit: r1424501 - /hadoop/common/branches/branch-2/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestResourceUsageEmulators.java

2012-12-20 Thread tgraves
Author: tgraves
Date: Thu Dec 20 14:52:24 2012
New Revision: 1424501

URL: http://svn.apache.org/viewvc?rev=1424501&view=rev
Log:
MAPREDUCE-4895 Fix compilation failure of org.apache.hadoop.mapred.gridmix.TestResourceUsageEmulators (Dennis Y via tgraves)

Modified:

hadoop/common/branches/branch-2/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestResourceUsageEmulators.java

Modified: 
hadoop/common/branches/branch-2/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestResourceUsageEmulators.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestResourceUsageEmulators.java?rev=1424501&r1=1424500&r2=1424501&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestResourceUsageEmulators.java (original)
+++ hadoop/common/branches/branch-2/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestResourceUsageEmulators.java Thu Dec 20 14:52:24 2012
@@ -32,7 +32,6 @@ import org.apache.hadoop.mapreduce.TaskT
 import org.apache.hadoop.mapreduce.server.tasktracker.TTConfig;
 import org.apache.hadoop.mapreduce.task.MapContextImpl;
 import org.apache.hadoop.mapreduce.util.ResourceCalculatorPlugin;
-import org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.ProcResourceValues;
 import org.apache.hadoop.tools.rumen.ResourceUsageMetrics;
 import org.apache.hadoop.mapred.DummyResourceCalculatorPlugin;
 import org.apache.hadoop.mapred.gridmix.LoadJob.ResourceUsageMatcherRunner;




svn commit: r1424546 - in /hadoop/common/branches/branch-1: CHANGES.txt src/mapred/org/apache/hadoop/mapred/JobTracker.java

2012-12-20 Thread tomwhite
Author: tomwhite
Date: Thu Dec 20 15:54:30 2012
New Revision: 1424546

URL: http://svn.apache.org/viewvc?rev=1424546&view=rev
Log:
MAPREDUCE-4806. Some private methods in JobTracker.RecoveryManager are not used anymore after MAPREDUCE-3837. Contributed by Karthik Kambatla.

Modified:
hadoop/common/branches/branch-1/CHANGES.txt

hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/JobTracker.java

Modified: hadoop/common/branches/branch-1/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/CHANGES.txt?rev=1424546&r1=1424545&r2=1424546&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/CHANGES.txt (original)
+++ hadoop/common/branches/branch-1/CHANGES.txt Thu Dec 20 15:54:30 2012
@@ -354,6 +354,9 @@ Release 1.2.0 - unreleased
 MAPREDUCE-4860. DelegationTokenRenewal attempts to renew token even after
 a job is removed. (kkambatl via tucu)
 
+MAPREDUCE-4806. Some private methods in JobTracker.RecoveryManager are not
+used anymore after MAPREDUCE-3837. (Karthik Kambatla via tomwhite)
+
 Release 1.1.2 - Unreleased
 
   INCOMPATIBLE CHANGES

Modified: hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/JobTracker.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/JobTracker.java?rev=1424546&r1=1424545&r2=1424546&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/JobTracker.java (original)
+++ hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/JobTracker.java Thu Dec 20 15:54:30 2012
@@ -1309,233 +1309,6 @@ public class JobTracker implements MRCon
   }
   return ret;
 }
-
-    private JobStatusChangeEvent updateJob(JobInProgress jip, 
-                                           JobHistory.JobInfo job) {
-      // Change the job priority
-      String jobpriority = job.get(Keys.JOB_PRIORITY);
-      JobPriority priority = JobPriority.valueOf(jobpriority);
-      // It's important to update this via the jobtracker's api as it will 
-      // take care of updating the event listeners too
-      
-      try {
-        setJobPriority(jip.getJobID(), priority);
-      } catch (IOException e) {
-        // This will not happen. JobTracker can set jobPriority of any job
-        // as mrOwner has the needed permissions.
-        LOG.warn("Unexpected. JobTracker could not do SetJobPriority on "
-                 + jip.getJobID() + ". " + e);
-      }
-
-      // Save the previous job status
-      JobStatus oldStatus = (JobStatus)jip.getStatus().clone();
-      
-      // Set the start/launch time only if there are recovered tasks
-      // Increment the job's restart count
-      jip.updateJobInfo(job.getLong(JobHistory.Keys.SUBMIT_TIME), 
-                        job.getLong(JobHistory.Keys.LAUNCH_TIME));
-
-      // Save the new job status
-      JobStatus newStatus = (JobStatus)jip.getStatus().clone();
-      
-      return new JobStatusChangeEvent(jip, EventType.START_TIME_CHANGED, oldStatus, 
-                                      newStatus);
-    }
-
-    private void updateTip(TaskInProgress tip, JobHistory.Task task) {
-      long startTime = task.getLong(Keys.START_TIME);
-      if (startTime != 0) {
-        tip.setExecStartTime(startTime);
-      }
-      
-      long finishTime = task.getLong(Keys.FINISH_TIME);
-      // For failed tasks finish-time will be missing
-      if (finishTime != 0) {
-        tip.setExecFinishTime(finishTime);
-      }
-      
-      String cause = task.get(Keys.TASK_ATTEMPT_ID);
-      if (cause.length() > 0) {
-        // This means that this is a FAILED event
-        TaskAttemptID id = TaskAttemptID.forName(cause);
-        TaskStatus status = tip.getTaskStatus(id);
-        synchronized (JobTracker.this) {
-          // This will add the tip failed event in the new log
-          tip.getJob().failedTask(tip, id, status.getDiagnosticInfo(), 
-                                  status.getPhase(), status.getRunState(), 
-                                  status.getTaskTracker());
-        }
-      }
-    }
-
-    private void createTaskAttempt(JobInProgress job, 
-                                   TaskAttemptID attemptId, 
-                                   JobHistory.TaskAttempt attempt) 
-      throws UnknownHostException {
-      TaskID id = attemptId.getTaskID();
-      String type = attempt.get(Keys.TASK_TYPE);
-      TaskInProgress tip = job.getTaskInProgress(id);
-      
-      //I. Get the required info
-      TaskStatus taskStatus = null;
-      String trackerName = attempt.get(Keys.TRACKER_NAME);
-      String trackerHostName = 
-        JobInProgress.convertTrackerNameToHostName(trackerName);
-      // recover the port information.
-      int port = 0; // default to 0
-      String hport = attempt.get(Keys.HTTP_PORT);
-      if (hport != 

[Hadoop Wiki] Update of PoweredBy by SimoneLeo

2012-12-20 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The PoweredBy page has been changed by SimoneLeo:
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=412&rev2=413

Comment:
Updated CRS4 entry

   * ''[[http://www.crs4.it|CRS4]] ''
    * ''Hadoop deployed dynamically on subsets of a 400-node cluster ''
     * ''node: two quad-core 2.83GHz Xeons, 16 GB RAM, two 250GB HDDs ''
+    * ''most deployments use our high-performance GPFS (3.8PB, 15GB/s random r/w) ''
    * ''Computational biology applications ''
  
   * ''[[http://crowdmedia.de/|crowdmedia]] ''


svn commit: r1424566 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: ./ src/main/java/org/apache/hadoop/fs/shell/ src/test/java/org/apache/hadoop/fs/

2012-12-20 Thread bobby
Author: bobby
Date: Thu Dec 20 16:19:48 2012
New Revision: 1424566

URL: http://svn.apache.org/viewvc?rev=1424566&view=rev
Log:
HADOOP-9105. FsShell -moveFromLocal erroneously fails (daryn via bobby)

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1424566&r1=1424565&r2=1424566&view=diff
==============================================================================
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt (original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Thu Dec 20 16:19:48 2012
@@ -1224,6 +1224,8 @@ Release 0.23.6 - UNRELEASED
 HADOOP-9038. unit-tests for AllocatorPerContext.PathIterator (Ivan A.
 Veselovsky via bobby)
 
+HADOOP-9105. FsShell -moveFromLocal erroneously fails (daryn via bobby)
+
 Release 0.23.5 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java?rev=1424566&r1=1424565&r2=1424566&view=diff
==============================================================================
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java (original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java Thu Dec 20 16:19:48 2012
@@ -311,6 +311,7 @@ abstract public class Command extends Co
         if (recursive && item.stat.isDirectory()) {
           recursePath(item);
         }
+        postProcessPath(item);
       } catch (IOException e) {
         displayError(e);
       }
@@ -330,6 +331,15 @@ abstract public class Command extends Co
   }
 
   /**
+   * Hook for commands to implement an operation to be applied on each
+   * path for the command after being processed successfully
+   * @param item a {@link PathData} object
+   * @throws IOException if anything goes wrong...
+   */
+  protected void postProcessPath(PathData item) throws IOException {
+  }
+
+  /**
    *  Gets the directory listing for a path and invokes
    *  {@link #processPaths(PathData, PathData...)}
    *  @param item {@link PathData} for directory to recurse into

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java?rev=1424566&r1=1424565&r2=1424566&view=diff
==============================================================================
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java (original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java Thu Dec 20 16:19:48 2012
@@ -24,6 +24,7 @@ import java.util.LinkedList;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.PathExistsException;
 import org.apache.hadoop.fs.shell.CopyCommands.CopyFromLocal;
 
 /** Various commands for moving files */
@@ -49,7 +50,21 @@ class MoveCommands {
 
     @Override
     protected void processPath(PathData src, PathData target) throws IOException {
-      target.fs.moveFromLocalFile(src.path, target.path);
+      // unlike copy, don't merge existing dirs during move
+      if (target.exists && target.stat.isDirectory()) {
+        throw new PathExistsException(target.toString());
+      }
+      super.processPath(src, target);
+    }
+
+    @Override
+    protected void postProcessPath(PathData src) throws IOException {
+      if (!src.fs.delete(src.path, false)) {
+        // we have no way to know the actual error...
+        PathIOException e = new PathIOException(src.toString());
+        e.setOperation("remove");
+        throw e;
+      }
     }
   }
 
@@ -95,4 +110,4 @@ class MoveCommands {
       }
     }
   }
-}
\ No newline at end of file
+}
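
For readers following the patch: postProcessPath() is a template-method hook that processPaths() now calls only after a path (and any recursion into it) has been processed without an IOException, and MoveFromLocal uses it to delete the local source only once the copy is known to be good. A simplified, self-contained sketch of that control flow (the class and method names below are invented for illustration; this is not the actual shell code):

    import java.io.IOException;

    // Simplified model of the Command.processPaths() change: the post hook
    // runs only when per-path processing (and recursion) succeeded.
    abstract class PathCommandSketch {
      void processPaths(String... paths) {
        for (String p : paths) {
          try {
            process(p);        // may throw IOException
            postProcess(p);    // success-only hook, as added by HADOOP-9105
          } catch (IOException e) {
            System.err.println(p + ": " + e.getMessage());
          }
        }
      }
      abstract void process(String path) throws IOException;
      void postProcess(String path) throws IOException {}  // default: no-op
    }

A move in this model does the copy in process() and deletes the source in postProcess(), so a failed copy can never destroy local data.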

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java
URL: 

svn commit: r1424574 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: ./ src/main/java/org/apache/hadoop/fs/shell/ src/test/java/org/apache/hadoop/fs/

2012-12-20 Thread bobby
Author: bobby
Date: Thu Dec 20 16:30:20 2012
New Revision: 1424574

URL: http://svn.apache.org/viewvc?rev=1424574&view=rev
Log:
svn merge -c 1424566 FIXES: HADOOP-9105. FsShell -moveFromLocal erroneously fails (daryn via bobby)

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java

Modified: hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1424574&r1=1424573&r2=1424574&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt (original)
+++ hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt Thu Dec 20 16:30:20 2012
@@ -919,6 +919,8 @@ Release 0.23.6 - UNRELEASED
 HADOOP-9038. unit-tests for AllocatorPerContext.PathIterator (Ivan A.
 Veselovsky via bobby)
 
+HADOOP-9105. FsShell -moveFromLocal erroneously fails (daryn via bobby)
+
 Release 0.23.5 - UNRELEASED
 
   INCOMPATIBLE CHANGES

Modified: hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java?rev=1424574&r1=1424573&r2=1424574&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java (original)
+++ hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Command.java Thu Dec 20 16:30:20 2012
@@ -307,6 +307,7 @@ abstract public class Command extends Co
         if (recursive && item.stat.isDirectory()) {
           recursePath(item);
         }
+        postProcessPath(item);
       } catch (IOException e) {
         displayError(e);
       }
@@ -326,6 +327,15 @@ abstract public class Command extends Co
   }
 
   /**
+   * Hook for commands to implement an operation to be applied on each
+   * path for the command after being processed successfully
+   * @param item a {@link PathData} object
+   * @throws IOException if anything goes wrong...
+   */
+  protected void postProcessPath(PathData item) throws IOException {
+  }
+
+  /**
    *  Gets the directory listing for a path and invokes
    *  {@link #processPaths(PathData, PathData...)}
    *  @param item {@link PathData} for directory to recurse into

Modified: hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java?rev=1424574&r1=1424573&r2=1424574&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java (original)
+++ hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/MoveCommands.java Thu Dec 20 16:30:20 2012
@@ -24,6 +24,7 @@ import java.util.LinkedList;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.PathExistsException;
 import org.apache.hadoop.fs.shell.CopyCommands.CopyFromLocal;
 
 /** Various commands for moving files */
@@ -49,7 +50,21 @@ class MoveCommands {
 
     @Override
     protected void processPath(PathData src, PathData target) throws IOException {
-      target.fs.moveFromLocalFile(src.path, target.path);
+      // unlike copy, don't merge existing dirs during move
+      if (target.exists && target.stat.isDirectory()) {
+        throw new PathExistsException(target.toString());
+      }
+      super.processPath(src, target);
+    }
+
+    @Override
+    protected void postProcessPath(PathData src) throws IOException {
+      if (!src.fs.delete(src.path, false)) {
+        // we have no way to know the actual error...
+        PathIOException e = new PathIOException(src.toString());
+        e.setOperation("remove");
+        throw e;
+      }
     }
   }
 
@@ -95,4 +110,4 @@ class MoveCommands {
       }
     }
   }
-}
\ No 

svn commit: r1424698 - in /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common: CHANGES.txt src/main/java/org/apache/hadoop/security/UserGroupInformation.java src/test/java/org/apac

2012-12-20 Thread tgraves
Author: tgraves
Date: Thu Dec 20 20:50:32 2012
New Revision: 1424698

URL: http://svn.apache.org/viewvc?rev=1424698&view=rev
Log:
HADOOP-8561. Introduce HADOOP_PROXY_USER for secure impersonation in child hadoop client processes (Yu Gao via tgraves)

Added:

    hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestProxyUserFromEnv.java
      - copied unchanged from r1422429, hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestProxyUserFromEnv.java
Modified:

hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java

Modified: hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1424698&r1=1424697&r2=1424698&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt (original)
+++ hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt Thu Dec 20 20:50:32 2012
@@ -15,6 +15,9 @@ Release 0.23.6 - UNRELEASED
 HADOOP-9108. Add a method to clear terminateCalled to ExitUtil for test 
 cases (Kihwal Lee via tgraves)
 
+HADOOP-8561. Introduce HADOOP_PROXY_USER for secure impersonation in 
+child hadoop client processes (Yu Gao via tgraves)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java?rev=1424698&r1=1424697&r2=1424698&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java (original)
+++ hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java Thu Dec 20 20:50:32 2012
@@ -81,6 +81,7 @@ public class UserGroupInformation {
*/
   private static final float TICKET_RENEW_WINDOW = 0.80f;
   static final String HADOOP_USER_NAME = "HADOOP_USER_NAME";
+  static final String HADOOP_PROXY_USER = "HADOOP_PROXY_USER";
   
   /** 
* UgiMetrics maintains UGI activity statistics
@@ -502,12 +503,20 @@ public class UserGroupInformation {
                                   subject);
         }
         login.login();
-        loginUser = new UserGroupInformation(subject);
-        loginUser.setLogin(login);
-        loginUser.setAuthenticationMethod(isSecurityEnabled() ?
-                                          AuthenticationMethod.KERBEROS :
-                                          AuthenticationMethod.SIMPLE);
-        loginUser = new UserGroupInformation(login.getSubject());
+        UserGroupInformation realUser = new UserGroupInformation(subject);
+        realUser.setLogin(login);
+        realUser.setAuthenticationMethod(isSecurityEnabled() ?
+                                         AuthenticationMethod.KERBEROS :
+                                         AuthenticationMethod.SIMPLE);
+        realUser = new UserGroupInformation(login.getSubject());
+        // If the HADOOP_PROXY_USER environment variable or property
+        // is specified, create a proxy user as the logged in user.
+        String proxyUser = System.getenv(HADOOP_PROXY_USER);
+        if (proxyUser == null) {
+          proxyUser = System.getProperty(HADOOP_PROXY_USER);
+        }
+        loginUser = proxyUser == null ? realUser : createProxyUser(proxyUser, realUser);
+
         String fileLocation = System.getenv(HADOOP_TOKEN_FILE_LOCATION);
         if (fileLocation != null) {
           // load the token storage file and put all of the tokens into the
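
For background: with this change, any child client process can impersonate another user by setting HADOOP_PROXY_USER in its environment (or as a system property), provided the cluster's hadoop.proxyuser.* settings authorize the real authenticated user. A sketch of the equivalent explicit API usage, built on the existing UGI methods; the user name "alice" and the path below are made-up examples:

    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class ProxyUserSketch {
      public static void main(String[] args) throws Exception {
        // The real, authenticated identity of this process.
        UserGroupInformation realUser = UserGroupInformation.getLoginUser();
        // What the HADOOP_PROXY_USER hook now does implicitly at login time:
        UserGroupInformation proxy =
            UserGroupInformation.createProxyUser("alice", realUser);
        // Work inside doAs() runs as the proxied user, subject to the
        // cluster's proxyuser authorization checks.
        proxy.doAs((PrivilegedExceptionAction<Void>) () -> {
          FileSystem fs = FileSystem.get(new Configuration());
          System.out.println(fs.exists(new Path("/user/alice")));
          return null;
        });
      }
    }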




svn commit: r1424734 - in /hadoop/common/branches/branch-1: ./ src/hdfs/ src/hdfs/org/apache/hadoop/hdfs/ src/hdfs/org/apache/hadoop/hdfs/server/namenode/ src/test/org/apache/hadoop/hdfs/

2012-12-20 Thread suresh
Author: suresh
Date: Thu Dec 20 22:20:10 2012
New Revision: 1424734

URL: http://svn.apache.org/viewvc?rev=1424734&view=rev
Log:
HDFS-4320. Add a separate configuration for namenode rpc address instead of using fs.default.name. Contributed by Mostafa Elhemali.

Modified:
hadoop/common/branches/branch-1/CHANGES.txt
hadoop/common/branches/branch-1/src/hdfs/hdfs-default.xml

hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/DFSConfigKeys.java

hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java

hadoop/common/branches/branch-1/src/test/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java

Modified: hadoop/common/branches/branch-1/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/CHANGES.txt?rev=1424734&r1=1424733&r2=1424734&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/CHANGES.txt (original)
+++ hadoop/common/branches/branch-1/CHANGES.txt Thu Dec 20 22:20:10 2012
@@ -145,6 +145,9 @@ Release 1.2.0 - unreleased
 MAPREDUCE-4845. ClusterStatus.getMaxMemory() and getUsedMemory() exist in
 MR1 but not MR2. (Sandy Ryza via tomwhite)
 
+HDFS-4320. Add a separate configuration for namenode rpc address instead
+of using fs.default.name. (Mostafa Elhemali via suresh)
+
   OPTIMIZATIONS
 
 HDFS-2533. Backport: Remove needless synchronization on some FSDataSet

Modified: hadoop/common/branches/branch-1/src/hdfs/hdfs-default.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/hdfs/hdfs-default.xml?rev=1424734&r1=1424733&r2=1424734&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/src/hdfs/hdfs-default.xml (original)
+++ hadoop/common/branches/branch-1/src/hdfs/hdfs-default.xml Thu Dec 20 22:20:10 2012
@@ -16,6 +16,16 @@ creations/deletions), or all.</descrip
 </property>
 
 <property>
+  <name>dfs.namenode.rpc-address</name>
+  <value></value>
+  <description>
+    RPC address that handles all clients requests. If empty then we'll get the
+    value from fs.default.name.
+    The value of this property will take the form of hdfs://nn-host1:rpc-port.
+  </description>
+</property>
+
+<property>
   <name>dfs.secondary.http.address</name>
   <value>0.0.0.0:50090</value>
   <description>

Modified: hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/DFSConfigKeys.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/DFSConfigKeys.java?rev=1424734&r1=1424733&r2=1424734&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/DFSConfigKeys.java (original)
+++ hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/DFSConfigKeys.java Thu Dec 20 22:20:10 2012
@@ -65,6 +65,7 @@ public class DFSConfigKeys extends Commo
   public static final int     DFS_NAMENODE_HTTP_PORT_DEFAULT = 50070;
   public static final String  DFS_NAMENODE_HTTP_ADDRESS_KEY = "dfs.namenode.http-address";
   public static final String  DFS_NAMENODE_HTTP_ADDRESS_DEFAULT = "0.0.0.0:" + DFS_NAMENODE_HTTP_PORT_DEFAULT;
+  public static final String  DFS_NAMENODE_RPC_ADDRESS_KEY = "dfs.namenode.rpc-address";
   public static final String  DFS_NAMENODE_SERVICE_RPC_ADDRESS_KEY = "dfs.namenode.servicerpc-address";
   public static final String  DFS_NAMENODE_MAX_OBJECTS_KEY = "dfs.namenode.max.objects";
   public static final long    DFS_NAMENODE_MAX_OBJECTS_DEFAULT = 0;

Modified: hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java?rev=1424734&r1=1424733&r2=1424734&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java (original)
+++ hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java Thu Dec 20 22:20:10 2012
@@ -236,7 +236,11 @@ public class NameNode implements ClientP
   }
 
   public static InetSocketAddress getAddress(Configuration conf) {
-    return getAddress(FileSystem.getDefaultUri(conf).toString());
+    String addr = conf.get(DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
+    if (addr == null || addr.isEmpty()) {
+      return getAddress(FileSystem.getDefaultUri(conf).toString());
+    }
+    return getAddress(addr);
   }
 
   public static URI getUri(InetSocketAddress namenode) {
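
In short, the RPC address lookup now prefers dfs.namenode.rpc-address and falls back to fs.default.name only when the new key is unset. A small sketch of that precedence, using the key names from the patch (the host names are invented):

    import java.net.InetSocketAddress;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.server.namenode.NameNode;

    public class RpcAddressSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration(false);
        conf.set("fs.default.name", "hdfs://nn-old:8020");
        // New key unset: resolves via fs.default.name.
        InetSocketAddress fallback = NameNode.getAddress(conf);
        System.out.println(fallback);

        conf.set("dfs.namenode.rpc-address", "nn-new:9000");
        // New key set: it takes precedence over fs.default.name.
        InetSocketAddress explicit = NameNode.getAddress(conf);
        System.out.println(explicit);
      }
    }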

Modified: hadoop/common/branches/branch-1/src/test/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/test/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java?rev=1424734&r1=1424733&r2=1424734&view=diff

svn commit: r1424737 - in /hadoop/common/branches/branch-1-win: ./ src/hdfs/ src/hdfs/org/apache/hadoop/hdfs/ src/hdfs/org/apache/hadoop/hdfs/server/namenode/ src/test/org/apache/hadoop/hdfs/

2012-12-20 Thread suresh
Author: suresh
Date: Thu Dec 20 22:32:23 2012
New Revision: 1424737

URL: http://svn.apache.org/viewvc?rev=1424737&view=rev
Log:
HDFS-4320. Merge 1424734 from branch-1

Modified:
hadoop/common/branches/branch-1-win/CHANGES.branch-1-win.txt
hadoop/common/branches/branch-1-win/src/hdfs/hdfs-default.xml

hadoop/common/branches/branch-1-win/src/hdfs/org/apache/hadoop/hdfs/DFSConfigKeys.java

hadoop/common/branches/branch-1-win/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java

hadoop/common/branches/branch-1-win/src/test/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java

Modified: hadoop/common/branches/branch-1-win/CHANGES.branch-1-win.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1-win/CHANGES.branch-1-win.txt?rev=1424737&r1=1424736&r2=1424737&view=diff
==============================================================================
--- hadoop/common/branches/branch-1-win/CHANGES.branch-1-win.txt (original)
+++ hadoop/common/branches/branch-1-win/CHANGES.branch-1-win.txt Thu Dec 20 22:32:23 2012
@@ -299,3 +299,6 @@ Branch-hadoop-1-win (branched from branc
 HDFS-3942. Backport HDFS-3495 and HDFS-4234: Update Balancer to support new
 NetworkTopology with NodeGroup and use generic code for choosing datanode
 in Balancer.  (Junping Du via szetszwo)
+
+HDFS-4320. Add a separate configuration for namenode rpc address instead
+of using fs.default.name. (Mostafa Elhemali via suresh)

Modified: hadoop/common/branches/branch-1-win/src/hdfs/hdfs-default.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1-win/src/hdfs/hdfs-default.xml?rev=1424737&r1=1424736&r2=1424737&view=diff
==============================================================================
--- hadoop/common/branches/branch-1-win/src/hdfs/hdfs-default.xml (original)
+++ hadoop/common/branches/branch-1-win/src/hdfs/hdfs-default.xml Thu Dec 20 22:32:23 2012
@@ -16,6 +16,16 @@ creations/deletions), or all.</descrip
 </property>
 
 <property>
+  <name>dfs.namenode.rpc-address</name>
+  <value></value>
+  <description>
+    RPC address that handles all clients requests. If empty then we'll get the
+    value from fs.default.name.
+    The value of this property will take the form of hdfs://nn-host1:rpc-port.
+  </description>
+</property>
+
+<property>
   <name>dfs.secondary.http.address</name>
   <value>0.0.0.0:50090</value>
   <description>

Modified: hadoop/common/branches/branch-1-win/src/hdfs/org/apache/hadoop/hdfs/DFSConfigKeys.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1-win/src/hdfs/org/apache/hadoop/hdfs/DFSConfigKeys.java?rev=1424737&r1=1424736&r2=1424737&view=diff
==============================================================================
--- hadoop/common/branches/branch-1-win/src/hdfs/org/apache/hadoop/hdfs/DFSConfigKeys.java (original)
+++ hadoop/common/branches/branch-1-win/src/hdfs/org/apache/hadoop/hdfs/DFSConfigKeys.java Thu Dec 20 22:32:23 2012
@@ -50,6 +50,7 @@ public class DFSConfigKeys extends Commo
   public static final int     DFS_NAMENODE_HTTP_PORT_DEFAULT = 50070;
   public static final String  DFS_NAMENODE_HTTP_ADDRESS_KEY = "dfs.namenode.http-address";
   public static final String  DFS_NAMENODE_HTTP_ADDRESS_DEFAULT = "0.0.0.0:" + DFS_NAMENODE_HTTP_PORT_DEFAULT;
+  public static final String  DFS_NAMENODE_RPC_ADDRESS_KEY = "dfs.namenode.rpc-address";
   public static final String  DFS_NAMENODE_SERVICE_RPC_ADDRESS_KEY = "dfs.namenode.servicerpc-address";
   public static final String  DFS_NAMENODE_MAX_OBJECTS_KEY = "dfs.namenode.max.objects";
   public static final long    DFS_NAMENODE_MAX_OBJECTS_DEFAULT = 0;

Modified: hadoop/common/branches/branch-1-win/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1-win/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java?rev=1424737&r1=1424736&r2=1424737&view=diff
==============================================================================
--- hadoop/common/branches/branch-1-win/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java (original)
+++ hadoop/common/branches/branch-1-win/src/hdfs/org/apache/hadoop/hdfs/server/namenode/NameNode.java Thu Dec 20 22:32:23 2012
@@ -225,7 +225,11 @@ public class NameNode implements ClientP
   }
 
   public static InetSocketAddress getAddress(Configuration conf) {
-    return getAddress(FileSystem.getDefaultUri(conf).toString());
+    String addr = conf.get(DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
+    if (addr == null || addr.isEmpty()) {
+      return getAddress(FileSystem.getDefaultUri(conf).toString());
+    }
+    return getAddress(addr);
  }
 
   public static URI getUri(InetSocketAddress namenode) {

Modified: hadoop/common/branches/branch-1-win/src/test/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java
URL: 

svn commit: r1424816 - in /hadoop/common/branches/branch-1: CHANGES.txt src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java

2012-12-20 Thread acmurthy
Author: acmurthy
Date: Fri Dec 21 07:38:49 2012
New Revision: 1424816

URL: http://svn.apache.org/viewvc?rev=1424816&view=rev
Log:
MAPREDUCE-4888. Fixed NLineInputFormat one-off error which dropped data. Contributed by Vinod K V.

Modified:
hadoop/common/branches/branch-1/CHANGES.txt

hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java

hadoop/common/branches/branch-1/src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java

Modified: hadoop/common/branches/branch-1/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/CHANGES.txt?rev=1424816&r1=1424815&r2=1424816&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/CHANGES.txt (original)
+++ hadoop/common/branches/branch-1/CHANGES.txt Fri Dec 21 07:38:49 2012
@@ -411,6 +411,9 @@ Release 1.1.2 - Unreleased
 
 MAPREDUCE-4859. Fixed TestRecoveryManager. (acmurthy) 
 
+MAPREDUCE-4888. Fixed NLineInputFormat one-off error which dropped data.
+(vinodkv via acmurthy) 
+
 Release 1.1.1 - 2012.11.18
 
   INCOMPATIBLE CHANGES

Modified: hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java?rev=1424816&r1=1424815&r2=1424816&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java (original)
+++ hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java Fri Dec 21 07:38:49 2012
@@ -97,25 +97,14 @@ public class NLineInputFormat extends Fi
           numLines++;
           length += num;
           if (numLines == N) {
-            // NLineInputFormat uses LineRecordReader, which always reads (and
-            // consumes) at least one character out of its upper split
-            // boundary. So to make sure that each mapper gets N lines, we
-            // move back the upper split limits of each split by one character 
-            // here.
-            if (begin == 0) {
-              splits.add(new FileSplit(fileName, begin, length - 1,
-                new String[] {}));
-            } else {
-              splits.add(new FileSplit(fileName, begin - 1, length,
-                new String[] {}));
-            }
+            splits.add(createFileSplit(fileName, begin, length));
             begin += length;
             length = 0;
             numLines = 0;
           }
         }
         if (numLines != 0) {
-          splits.add(new FileSplit(fileName, begin, length, new String[]{}));
+          splits.add(createFileSplit(fileName, begin, length));
         }
 
       } finally {
@@ -127,6 +116,23 @@ public class NLineInputFormat extends Fi
     return splits.toArray(new FileSplit[splits.size()]);
   }
 
+  /**
+   * NLineInputFormat uses LineRecordReader, which always reads
+   * (and consumes) at least one character out of its upper split
+   * boundary. So to make sure that each mapper gets N lines, we
+   * move back the upper split limits of each split 
+   * by one character here.
+   * @param fileName  Path of file
+   * @param begin  the position of the first byte in the file to process
+   * @param length  number of bytes in InputSplit
+   * @return  FileSplit
+   */
+  protected static FileSplit createFileSplit(Path fileName, long begin, long length) {
+    return (begin == 0) 
+    ? new FileSplit(fileName, begin, length - 1, new String[] {})
+    : new FileSplit(fileName, begin - 1, length, new String[] {});
+  }
+
   public void configure(JobConf conf) {
     N = conf.getInt("mapred.line.input.format.linespermap", 1);
   }
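
To make the arithmetic concrete: LineRecordReader skips the first (partial) line of any split that does not start at offset 0, and always reads one character past its upper boundary. A hypothetical walk-through of the helper's two branches with two 20-byte lines and N = 1 (the file path is invented; the split() method mirrors the new createFileSplit()):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileSplit;

    public class SplitArithmeticSketch {
      // Same branch logic as the new createFileSplit() helper.
      static FileSplit split(Path file, long begin, long length) {
        return (begin == 0)
            ? new FileSplit(file, begin, length - 1, new String[] {})
            : new FileSplit(file, begin - 1, length, new String[] {});
      }

      public static void main(String[] args) {
        Path file = new Path("/tmp/lines.txt");
        // First split: begin == 0, so only its length shrinks by one byte.
        System.out.println(split(file, 0L, 20L));   // /tmp/lines.txt:0+19
        // Later splits: start one byte early, so the reader, after discarding
        // the partial first line, still delivers exactly N whole lines.
        System.out.println(split(file, 20L, 20L));  // /tmp/lines.txt:19+20
      }
    }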

Modified: hadoop/common/branches/branch-1/src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java?rev=1424816&r1=1424815&r2=1424816&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java (original)
+++ hadoop/common/branches/branch-1/src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java Fri Dec 21 07:38:49 2012
@@ -48,9 +48,6 @@ public class TestLineInputFormat extends
     JobConf job = new JobConf();
     Path file = new Path(workDir, "test.txt");
 
-    int seed = new Random().nextInt();
-    Random random = new Random(seed);
-
     localFs.delete(workDir, true);
     FileInputFormat.setInputPaths(job, workDir);
     int numLinesPerMap = 5;
@@ -58,7 +55,8 @@ public class TestLineInputFormat extends
 
     // for a variety of lengths
     for (int length = 0; length < MAX_LENGTH;
-         length += random.nextInt(MAX_LENGTH/10) + 1) {
+         length += 1) 

svn commit: r1424817 - in /hadoop/common/branches/branch-1.1: CHANGES.txt src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java

2012-12-20 Thread acmurthy
Author: acmurthy
Date: Fri Dec 21 07:41:29 2012
New Revision: 1424817

URL: http://svn.apache.org/viewvc?rev=1424817&view=rev
Log:
Merge -c 1424815 from branch-1 to branch-1.1 to fix MAPREDUCE-4888. Fixed NLineInputFormat one-off error which dropped data. Contributed by Vinod K V.

Modified:
hadoop/common/branches/branch-1.1/CHANGES.txt

hadoop/common/branches/branch-1.1/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java

hadoop/common/branches/branch-1.1/src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java

Modified: hadoop/common/branches/branch-1.1/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1.1/CHANGES.txt?rev=1424817&r1=1424816&r2=1424817&view=diff
==============================================================================
--- hadoop/common/branches/branch-1.1/CHANGES.txt (original)
+++ hadoop/common/branches/branch-1.1/CHANGES.txt Fri Dec 21 07:41:29 2012
@@ -65,6 +65,9 @@ Release 1.1.2 - 2012.12.07
 
 MAPREDUCE-4859. Fixed TestRecoveryManager. (acmurthy) 
 
+MAPREDUCE-4888. Fixed NLineInputFormat one-off error which dropped data.
+(vinodkv via acmurthy) 
+
 Release 1.1.1 - 2012.11.18
 
   INCOMPATIBLE CHANGES

Modified: hadoop/common/branches/branch-1.1/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1.1/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java?rev=1424817&r1=1424816&r2=1424817&view=diff
==============================================================================
--- hadoop/common/branches/branch-1.1/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java (original)
+++ hadoop/common/branches/branch-1.1/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java Fri Dec 21 07:41:29 2012
@@ -97,25 +97,14 @@ public class NLineInputFormat extends Fi
           numLines++;
           length += num;
           if (numLines == N) {
-            // NLineInputFormat uses LineRecordReader, which always reads (and
-            // consumes) at least one character out of its upper split
-            // boundary. So to make sure that each mapper gets N lines, we
-            // move back the upper split limits of each split by one character 
-            // here.
-            if (begin == 0) {
-              splits.add(new FileSplit(fileName, begin, length - 1,
-                new String[] {}));
-            } else {
-              splits.add(new FileSplit(fileName, begin - 1, length,
-                new String[] {}));
-            }
+            splits.add(createFileSplit(fileName, begin, length));
             begin += length;
             length = 0;
             numLines = 0;
           }
         }
         if (numLines != 0) {
-          splits.add(new FileSplit(fileName, begin, length, new String[]{}));
+          splits.add(createFileSplit(fileName, begin, length));
         }
 
       } finally {
@@ -127,6 +116,23 @@ public class NLineInputFormat extends Fi
     return splits.toArray(new FileSplit[splits.size()]);
   }
 
+  /**
+   * NLineInputFormat uses LineRecordReader, which always reads
+   * (and consumes) at least one character out of its upper split
+   * boundary. So to make sure that each mapper gets N lines, we
+   * move back the upper split limits of each split 
+   * by one character here.
+   * @param fileName  Path of file
+   * @param begin  the position of the first byte in the file to process
+   * @param length  number of bytes in InputSplit
+   * @return  FileSplit
+   */
+  protected static FileSplit createFileSplit(Path fileName, long begin, long length) {
+    return (begin == 0) 
+    ? new FileSplit(fileName, begin, length - 1, new String[] {})
+    : new FileSplit(fileName, begin - 1, length, new String[] {});
+  }
+
   public void configure(JobConf conf) {
     N = conf.getInt("mapred.line.input.format.linespermap", 1);
   }

Modified: hadoop/common/branches/branch-1.1/src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1.1/src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java?rev=1424817&r1=1424816&r2=1424817&view=diff
==============================================================================
--- hadoop/common/branches/branch-1.1/src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java (original)
+++ hadoop/common/branches/branch-1.1/src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java Fri Dec 21 07:41:29 2012
@@ -48,9 +48,6 @@ public class TestLineInputFormat extends
     JobConf job = new JobConf();
     Path file = new Path(workDir, "test.txt");
 
-    int seed = new Random().nextInt();
-    Random random = new Random(seed);
-
     localFs.delete(workDir, true);
     FileInputFormat.setInputPaths(job, workDir);
     int numLinesPerMap = 5;
@@ -58,7 +55,8 @@ public class TestLineInputFormat extends
 
     // for a variety of lengths
     for (int length = 0; length < MAX_LENGTH;