[Hadoop Wiki] Update of HadoopIsNot by SteveLoughran

2012-10-26 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The HadoopIsNot page has been changed by SteveLoughran:
http://wiki.apache.org/hadoop/HadoopIsNot?action=diff&rev1=8&rev2=9

  
  Hadoop stores data in files, and does not index them. If you want to find 
something, you have to run a MapReduce job going through all the data. This 
takes time, and means that you cannot directly use Hadoop as a substitute for a 
database. Where Hadoop works is where the data is too big for a database (i.e. 
you have reached the technical limits, not just that you don't want to pay for 
a database license). With very large datasets, the cost of regenerating indexes 
is so high you can't easily index changing data. With many machines trying to 
write to the database, you can't get locks on it. Here the idea of 
vaguely-related files in a distributed filesystem can work.
  
- There is a project adding a column-table database on top of Hadoop - 
[[HBase]].
+ There is a high performance column-table database that runs on top of Hadoop 
HDFS: Apache [[HBase]]. This is a great place to keep the results extracted 
from your original data.
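Sketched as plain Java (not Hadoop APIs; the record format here is made up for illustration), the difference the paragraph describes is a full scan versus an index lookup:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ScanVsIndex {
    // Unindexed files (the HDFS model): every lookup reads every record,
    // which is what running a MapReduce job over raw data amounts to.
    static String lookupByScan(List<String> records, String key) {
        for (String r : records) {
            if (r.startsWith(key + ":")) {
                return r;          // found only after scanning up to here
            }
        }
        return null;
    }

    // An index (the HBase model): pay once to build, then O(1) lookups.
    static Map<String, String> buildIndex(List<String> records) {
        Map<String, String> index = new HashMap<String, String>();
        for (String r : records) {
            index.put(r.split(":")[0], r);
        }
        return index;
    }

    public static void main(String[] args) {
        List<String> records = Arrays.asList("alice:3", "bob:7", "carol:1");
        System.out.println(lookupByScan(records, "bob"));      // bob:7, via full scan
        System.out.println(buildIndex(records).get("bob"));    // bob:7, via index
    }
}
```

The scan cost grows with the data; the index cost is paid at build time, which is why regenerating indexes over very large, changing datasets becomes prohibitive.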
  
  == MapReduce is not always the best algorithm ==
  
@@ -49, +49 @@

  
  This is important. If you don't know these, you are out of your depth and 
should not start installing Hadoop until you have a couple of Linux systems up 
and running, can ssh into each of them without entering a password, and know 
each machine's hostname. The Hadoop installation documents assume you can do 
these things and aren't going to bother explaining them.
  
- == Hadoop Filesystem is not a substitute for a High Availability SAN-hosted 
FS ==
- 
- There are some very high-end filesystems out there: GPFS, Lustre, which offer 
fantastic data availability and performance, usually by requiring high end 
hardware (SAN and infiniband networking, RAID storage). Hadoop HDFS cheats, 
delivering high local data access rates by running code near the data, instead 
of being fast at shipping the data remotely. Instead of using RAID controllers, 
it uses non-RAIDed storage across multiple machines.
- 
- HDFS is not (currently) Highly Available. The Namenode is a [[SPOF]].  There 
is work underway to fix this short-coming.  However, there is no realistic time 
frame as to when that work will be available in a stable release.
- 
- Because of these limitations, if you want a  filesystem that is always 
available, HDFS is not yet there. You can run Hadoop MapReduce over other 
filesystems, however.
  
  == HDFS is not a POSIX filesystem ==
  
- The Posix filesystem model has files that can appended too, seek() calls 
made, files locked. Hadoop is only just adding (in July 2009) append() 
operations, and seek() operations throw away a lot of performance. You cannot 
seamlessly map code that assumes that all filesystems are Posix-compatible to 
HDFS.
+ The POSIX filesystem model has files that can be appended to, seek() calls 
made, and files locked. You cannot seamlessly map code that assumes that all 
filesystems are POSIX-compatible to HDFS.
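A small illustration (local-filesystem Java with hypothetical file contents, not HDFS code) of the POSIX idioms the paragraph mentions, which are exactly the operations HDFS restricts:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;

public class PosixAssumptions {
    // Performs random seek + in-place overwrite + re-open for append on a
    // local file and returns the result. None of these map cleanly to HDFS.
    static String demo() throws IOException {
        File f = File.createTempFile("posix", ".dat");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.write("hello world".getBytes("UTF-8"));
            raf.seek(6);                           // random seek
            raf.write("earth".getBytes("UTF-8"));  // in-place overwrite
        }
        try (FileOutputStream out = new FileOutputStream(f, true)) {
            out.write('!');                        // re-open for append
        }
        return new String(Files.readAllBytes(f.toPath()), "UTF-8");
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```

Code built on these idioms has to be restructured (typically into write-once, read-streaming form) before it can run against HDFS.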
  


svn commit: r1402605 - /hadoop/common/branches/HDFS-2802/

2012-10-26 Thread szetszwo
Author: szetszwo
Date: Fri Oct 26 18:17:03 2012
New Revision: 1402605

URL: http://svn.apache.org/viewvc?rev=1402605&view=rev
Log:
Merge r1402274 through r1402603 from trunk.

Modified:
hadoop/common/branches/HDFS-2802/   (props changed)

Propchange: hadoop/common/branches/HDFS-2802/
--
  Merged /hadoop/common/trunk:r1402274-1402603




svn commit: r1402605 - in /hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common: CHANGES.txt src/main/docs/ src/main/java/ src/test/core/

2012-10-26 Thread szetszwo
Author: szetszwo
Date: Fri Oct 26 18:17:03 2012
New Revision: 1402605

URL: http://svn.apache.org/viewvc?rev=1402605&view=rev
Log:
Merge r1402274 through r1402603 from trunk.

Modified:

hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt
   (props changed)

hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/docs/
   (props changed)

hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/
   (props changed)

hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/core/
   (props changed)

Propchange: 
hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/CHANGES.txt
--
  Merged 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt:r1402274-1402603

Propchange: 
hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/docs/
--
  Merged 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs:r1402274-1402603

Propchange: 
hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/
--
  Merged 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java:r1402274-1402603

Propchange: 
hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/test/core/
--
  Merged 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core:r1402274-1402603




svn commit: r1402639 - in /hadoop/common/branches/branch-1/src/core/org/apache/hadoop: conf/Configuration.java http/HttpServer.java

2012-10-26 Thread suresh
Author: suresh
Date: Fri Oct 26 19:56:39 2012
New Revision: 1402639

URL: http://svn.apache.org/viewvc?rev=1402639&view=rev
Log:
HADOOP-8567. Port conf servlet to dump running configuration  to branch 1.x. 
Contributed by Jing Zhao.

Modified:

hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java

hadoop/common/branches/branch-1/src/core/org/apache/hadoop/http/HttpServer.java

Modified: 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java?rev=1402639&r1=1402638&r2=1402639&view=diff
==
--- 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java
 (original)
+++ 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java
 Fri Oct 26 19:56:39 2012
@@ -27,6 +27,7 @@ import java.io.IOException;
 import java.io.InputStream;
 import java.io.InputStreamReader;
 import java.io.OutputStream;
+import java.io.OutputStreamWriter;
 import java.io.Reader;
 import java.io.Writer;
 import java.net.URL;
@@ -51,6 +52,7 @@ import javax.xml.parsers.DocumentBuilder
 import javax.xml.parsers.DocumentBuilderFactory;
 import javax.xml.parsers.ParserConfigurationException;
 import javax.xml.transform.Transformer;
+import javax.xml.transform.TransformerException;
 import javax.xml.transform.TransformerFactory;
 import javax.xml.transform.dom.DOMSource;
 import javax.xml.transform.stream.StreamResult;
@@ -65,6 +67,7 @@ import org.apache.hadoop.util.Reflection
 import org.apache.hadoop.util.StringUtils;
 import org.codehaus.jackson.JsonFactory;
 import org.codehaus.jackson.JsonGenerator;
+import org.w3c.dom.Comment;
 import org.w3c.dom.DOMException;
 import org.w3c.dom.Document;
 import org.w3c.dom.Element;
@@ -171,10 +174,10 @@ public class Configuration implements It
 new CopyOnWriteArrayList<String>();
   
   /**
-   * Flag to indicate if the storage of resource which updates a key needs 
-   * to be stored for each key
+   * The value reported as the setting resource when a key is set
+   * by code rather than a file resource.
*/
-  private boolean storeResource;
+  static final String UNKNOWN_RESOURCE = "Unknown";
   
   /**
* Stores the mapping of key to the resource which modifies or loads 
@@ -223,30 +226,13 @@ public class Configuration implements It
*/
   public Configuration(boolean loadDefaults) {
 this.loadDefaults = loadDefaults;
+updatingResource = new HashMap<String, String>();
 if (LOG.isDebugEnabled()) {
   LOG.debug(StringUtils.stringifyException(new IOException("config()")));
 }
 synchronized(Configuration.class) {
   REGISTRY.put(this, null);
 }
-this.storeResource = false;
-  }
-  
-  /**
-   * A new configuration with the same settings and additional facility for
-   * storage of resource to each key which loads or updates 
-   * the key most recently
-   * @param other the configuration from which to clone settings
-   * @param storeResource flag to indicate if the storage of resource to 
-   * each key is to be stored
-   */
-  private Configuration(Configuration other, boolean storeResource) {
-this(other);
-this.loadDefaults = other.loadDefaults;
-this.storeResource = storeResource;
-if (storeResource) {
-  updatingResource = new HashMap<String, String>();
-}
   }
   
   /** 
@@ -260,20 +246,23 @@ public class Configuration implements It
   LOG.debug(StringUtils.stringifyException
 (new IOException("config(config)")));
 }
-   
-   this.resources = (ArrayList)other.resources.clone();
-   synchronized(other) {
- if (other.properties != null) {
-   this.properties = (Properties)other.properties.clone();
- }
-
- if (other.overlay!=null) {
-   this.overlay = (Properties)other.overlay.clone();
- }
-   }
-   
+
+this.resources = (ArrayList) other.resources.clone();
+synchronized (other) {
+  if (other.properties != null) {
+this.properties = (Properties) other.properties.clone();
+  }
+
+  if (other.overlay != null) {
+this.overlay = (Properties) other.overlay.clone();
+  }
+
+  this.updatingResource = new HashMap<String, String>(
+  other.updatingResource);
+}
+
 this.finalParameters = new HashSet<String>(other.finalParameters);
-synchronized(Configuration.class) {
+synchronized (Configuration.class) {
   REGISTRY.put(this, null);
 }
   }
@@ -437,6 +426,7 @@ public class Configuration implements It
   public void set(String name, String value) {
 getOverlay().setProperty(name, value);
 getProps().setProperty(name, value);
+this.updatingResource.put(name, UNKNOWN_RESOURCE);
   }
  
   /**
@@ -1071,10 +1061,8 @@ public class Configuration implements It
   loadResources(properties, resources, quietmode);
   if 

svn commit: r1402641 - /hadoop/common/branches/branch-1/CHANGES.txt

2012-10-26 Thread suresh
Author: suresh
Date: Fri Oct 26 19:58:29 2012
New Revision: 1402641

URL: http://svn.apache.org/viewvc?rev=1402641&view=rev
Log:
HADOOP-8567. Adding CHANGES.txt change missed in the commit 1402639.

Modified:
hadoop/common/branches/branch-1/CHANGES.txt

Modified: hadoop/common/branches/branch-1/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/CHANGES.txt?rev=1402641&r1=1402640&r2=1402641&view=diff
==
--- hadoop/common/branches/branch-1/CHANGES.txt (original)
+++ hadoop/common/branches/branch-1/CHANGES.txt Fri Oct 26 19:58:29 2012
@@ -29,6 +29,9 @@ Release 1.2.0 - unreleased
 HDFS-3912. Detect and avoid stale datanodes for writes.
 (Jing Zhao via suresh)
 
+HADOOP-8567. Port conf servlet to dump running configuration  to 
+branch 1.x. (todd, Backported by Jing Zhao via suresh)
+
   IMPROVEMENTS
 
 HDFS-3515. Port HDFS-1457 to branch-1. (eli)




svn commit: r1402660 - in /hadoop/common/trunk/hadoop-common-project/hadoop-common: CHANGES.txt src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java

2012-10-26 Thread tgraves
Author: tgraves
Date: Fri Oct 26 21:03:29 2012
New Revision: 1402660

URL: http://svn.apache.org/viewvc?rev=1402660&view=rev
Log:
HADOOP-8713. TestRPCCompatibility fails intermittently with JDK7 (Trevor 
Robinson via tgraves)

Modified:
hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java

Modified: hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1402660&r1=1402659&r2=1402660&view=diff
==
--- hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt Fri Oct 
26 21:03:29 2012
@@ -383,6 +383,9 @@ Release 2.0.3-alpha - Unreleased 
 HADOOP-8951. RunJar to fail with user-comprehensible error 
 message if jar missing. (stevel via suresh)
 
+HADOOP-8713. TestRPCCompatibility fails intermittently with JDK7
+(Trevor Robinson via tgraves)
+
 Release 2.0.2-alpha - 2012-09-07 
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java?rev=1402660&r1=1402659&r2=1402660&view=diff
==
--- 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java
 (original)
+++ 
hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java
 Fri Oct 26 21:03:29 2012
@@ -36,6 +36,7 @@ import org.apache.hadoop.ipc.protobuf.Pr
 import 
org.apache.hadoop.ipc.protobuf.ProtocolInfoProtos.ProtocolSignatureProto;
 import org.apache.hadoop.net.NetUtils;
 import org.junit.After;
+import org.junit.Before;
 import org.junit.Test;
 
 /** Unit test for supporting method-name based compatible RPCs. */
@@ -114,6 +115,11 @@ public class TestRPCCompatibility {
 }
 
   }
+
+  @Before
+  public void setUp() {
+ProtocolSignature.resetCache();
+  }
   
   @After
   public void tearDown() throws IOException {
@@ -219,7 +225,6 @@ System.out.println("echo int is NOT supp
   
   @Test // equal version client and server
   public void testVersion2ClientVersion2Server() throws Exception {
-ProtocolSignature.resetCache();
 // create a server with two handlers
 TestImpl2 impl = new TestImpl2();
 server = new RPC.Builder(conf).setProtocol(TestProtocol2.class)
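The fix moves ProtocolSignature.resetCache() out of one test method and into an @Before hook. A standalone sketch of why that matters (class and field names below are simplified stand-ins, not the Hadoop classes):

```java
import java.util.HashMap;
import java.util.Map;

public class StaticStateReset {
    // Stand-in for the static cache that ProtocolSignature keeps.
    static final Map<String, Long> CACHE = new HashMap<String, Long>();

    // What the new @Before setUp() does: run before *every* test, so the
    // outcome no longer depends on JDK7's unspecified method ordering.
    static void setUp() {
        CACHE.clear();
    }

    // A test that would fail intermittently if an earlier test left
    // entries behind and setUp() were not called first.
    static boolean testPopulatesCache() {
        setUp();
        CACHE.put("TestProtocol2", 2L);
        return CACHE.size() == 1;
    }

    public static void main(String[] args) {
        CACHE.put("stale", 1L);                    // leftovers from a prior test
        System.out.println(testPopulatesCache());  // true
    }
}
```

With the reset inside only one test method, any JDK7 run that ordered the methods differently saw stale cache entries; running it before each test makes every ordering equivalent.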




svn commit: r1402662 - in /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common: CHANGES.txt src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java

2012-10-26 Thread tgraves
Author: tgraves
Date: Fri Oct 26 21:04:57 2012
New Revision: 1402662

URL: http://svn.apache.org/viewvc?rev=1402662&view=rev
Log:
merge -r 1402659:1402660  from trunk. FIXES: HADOOP-8713

Modified:

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt

hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt?rev=1402662&r1=1402661&r2=1402662&view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
(original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/CHANGES.txt 
Fri Oct 26 21:04:57 2012
@@ -113,6 +113,9 @@ Release 2.0.3-alpha - Unreleased 
 HADOOP-8951. RunJar to fail with user-comprehensible error 
 message if jar missing. (stevel via suresh)
 
+HADOOP-8713. TestRPCCompatibility fails intermittently with JDK7
+(Trevor Robinson via tgraves)
+
 Release 2.0.2-alpha - 2012-09-07 
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java?rev=1402662&r1=1402661&r2=1402662&view=diff
==
--- 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java
 (original)
+++ 
hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java
 Fri Oct 26 21:04:57 2012
@@ -36,6 +36,7 @@ import org.apache.hadoop.ipc.protobuf.Pr
 import 
org.apache.hadoop.ipc.protobuf.ProtocolInfoProtos.ProtocolSignatureProto;
 import org.apache.hadoop.net.NetUtils;
 import org.junit.After;
+import org.junit.Before;
 import org.junit.Test;
 
 /** Unit test for supporting method-name based compatible RPCs. */
@@ -114,6 +115,11 @@ public class TestRPCCompatibility {
 }
 
   }
+
+  @Before
+  public void setUp() {
+ProtocolSignature.resetCache();
+  }
   
   @After
   public void tearDown() throws IOException {
@@ -219,7 +225,6 @@ System.out.println("echo int is NOT supp
   
   @Test // equal version client and server
   public void testVersion2ClientVersion2Server() throws Exception {
-ProtocolSignature.resetCache();
 // create a server with two handlers
 TestImpl2 impl = new TestImpl2();
 server = new RPC.Builder(conf).setProtocol(TestProtocol2.class)




svn commit: r1402678 - in /hadoop/common/branches/branch-1: CHANGES.txt src/core/org/apache/hadoop/conf/Configuration.java src/core/org/apache/hadoop/http/HttpServer.java

2012-10-26 Thread suresh
Author: suresh
Date: Fri Oct 26 21:44:29 2012
New Revision: 1402678

URL: http://svn.apache.org/viewvc?rev=1402678&view=rev
Log:
HADOOP-8567. Reverting the commits r1402641 and r1402639 because of missed 
files.

Modified:
hadoop/common/branches/branch-1/CHANGES.txt

hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java

hadoop/common/branches/branch-1/src/core/org/apache/hadoop/http/HttpServer.java

Modified: hadoop/common/branches/branch-1/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/CHANGES.txt?rev=1402678&r1=1402677&r2=1402678&view=diff
==
--- hadoop/common/branches/branch-1/CHANGES.txt (original)
+++ hadoop/common/branches/branch-1/CHANGES.txt Fri Oct 26 21:44:29 2012
@@ -29,9 +29,6 @@ Release 1.2.0 - unreleased
 HDFS-3912. Detect and avoid stale datanodes for writes.
 (Jing Zhao via suresh)
 
-HADOOP-8567. Port conf servlet to dump running configuration  to 
-branch 1.x. (todd, Backported by Jing Zhao via suresh)
-
   IMPROVEMENTS
 
 HDFS-3515. Port HDFS-1457 to branch-1. (eli)

Modified: 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java?rev=1402678&r1=1402677&r2=1402678&view=diff
==
--- 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java
 (original)
+++ 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java
 Fri Oct 26 21:44:29 2012
@@ -27,7 +27,6 @@ import java.io.IOException;
 import java.io.InputStream;
 import java.io.InputStreamReader;
 import java.io.OutputStream;
-import java.io.OutputStreamWriter;
 import java.io.Reader;
 import java.io.Writer;
 import java.net.URL;
@@ -52,7 +51,6 @@ import javax.xml.parsers.DocumentBuilder
 import javax.xml.parsers.DocumentBuilderFactory;
 import javax.xml.parsers.ParserConfigurationException;
 import javax.xml.transform.Transformer;
-import javax.xml.transform.TransformerException;
 import javax.xml.transform.TransformerFactory;
 import javax.xml.transform.dom.DOMSource;
 import javax.xml.transform.stream.StreamResult;
@@ -67,7 +65,6 @@ import org.apache.hadoop.util.Reflection
 import org.apache.hadoop.util.StringUtils;
 import org.codehaus.jackson.JsonFactory;
 import org.codehaus.jackson.JsonGenerator;
-import org.w3c.dom.Comment;
 import org.w3c.dom.DOMException;
 import org.w3c.dom.Document;
 import org.w3c.dom.Element;
@@ -174,10 +171,10 @@ public class Configuration implements It
 new CopyOnWriteArrayList<String>();
   
   /**
-   * The value reported as the setting resource when a key is set
-   * by code rather than a file resource.
+   * Flag to indicate if the storage of resource which updates a key needs 
+   * to be stored for each key
*/
-  static final String UNKNOWN_RESOURCE = "Unknown";
+  private boolean storeResource;
   
   /**
* Stores the mapping of key to the resource which modifies or loads 
@@ -226,13 +223,30 @@ public class Configuration implements It
*/
   public Configuration(boolean loadDefaults) {
 this.loadDefaults = loadDefaults;
-updatingResource = new HashMap<String, String>();
 if (LOG.isDebugEnabled()) {
   LOG.debug(StringUtils.stringifyException(new IOException("config()")));
 }
 synchronized(Configuration.class) {
   REGISTRY.put(this, null);
 }
+this.storeResource = false;
+  }
+  
+  /**
+   * A new configuration with the same settings and additional facility for
+   * storage of resource to each key which loads or updates 
+   * the key most recently
+   * @param other the configuration from which to clone settings
+   * @param storeResource flag to indicate if the storage of resource to 
+   * each key is to be stored
+   */
+  private Configuration(Configuration other, boolean storeResource) {
+this(other);
+this.loadDefaults = other.loadDefaults;
+this.storeResource = storeResource;
+if (storeResource) {
+  updatingResource = new HashMap<String, String>();
+}
   }
   
   /** 
@@ -246,23 +260,20 @@ public class Configuration implements It
   LOG.debug(StringUtils.stringifyException
-(new IOException("config(config)")));
 }
-
-this.resources = (ArrayList) other.resources.clone();
-synchronized (other) {
-  if (other.properties != null) {
-this.properties = (Properties) other.properties.clone();
-  }
-
-  if (other.overlay != null) {
-this.overlay = (Properties) other.overlay.clone();
-  }
-
-  this.updatingResource = new HashMap<String, String>(
-  other.updatingResource);
-}
-
+   
+   this.resources = (ArrayList)other.resources.clone();
+   synchronized(other) {
+ if (other.properties != null) {
+   this.properties = 

svn commit: r1402728 - in /hadoop/common/branches/branch-1: ./ src/core/org/apache/hadoop/conf/ src/core/org/apache/hadoop/http/ src/test/org/apache/hadoop/conf/

2012-10-26 Thread suresh
Author: suresh
Date: Sat Oct 27 00:38:46 2012
New Revision: 1402728

URL: http://svn.apache.org/viewvc?rev=1402728&view=rev
Log:
HADOOP-8567. Port conf servlet to dump running configuration  to branch 1.x. 
Contributed by Jing Zhao.

Added:

hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/ConfServlet.java

hadoop/common/branches/branch-1/src/test/org/apache/hadoop/conf/TestConfServlet.java
Modified:
hadoop/common/branches/branch-1/CHANGES.txt

hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/Configuration.java

hadoop/common/branches/branch-1/src/core/org/apache/hadoop/http/HttpServer.java

Modified: hadoop/common/branches/branch-1/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/CHANGES.txt?rev=1402728&r1=1402727&r2=1402728&view=diff
==
--- hadoop/common/branches/branch-1/CHANGES.txt (original)
+++ hadoop/common/branches/branch-1/CHANGES.txt Sat Oct 27 00:38:46 2012
@@ -104,6 +104,9 @@ Release 1.2.0 - unreleased
 HDFS-3062. Print logs outside the namesystem lock invalidateWorkForOneNode
 and computeReplicationWorkForBlock. (Jing Zhao via suresh)
 
+HADOOP-8567. Port conf servlet to dump running configuration to branch 1.x.
+(Jing Zhao via suresh)
+
   OPTIMIZATIONS
 
 HDFS-2533. Backport: Remove needless synchronization on some FSDataSet

Added: 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/ConfServlet.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/ConfServlet.java?rev=1402728&view=auto
==
--- 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/ConfServlet.java
 (added)
+++ 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/conf/ConfServlet.java
 Sat Oct 27 00:38:46 2012
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.conf;
+
+import java.io.IOException;
+import java.io.Writer;
+
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.http.HttpServer;
+
+/**
+ * A servlet to print out the running configuration data.
+ */
+@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
+@InterfaceStability.Unstable
+public class ConfServlet extends HttpServlet {
+  private static final long serialVersionUID = 1L;
+
+  private static final String FORMAT_JSON = "json";
+  private static final String FORMAT_XML = "xml";
+  private static final String FORMAT_PARAM = "format";
+
+  /**
+   * Return the Configuration of the daemon hosting this servlet.
+   * This is populated when the HttpServer starts.
+   */
+  private Configuration getConfFromContext() {
+Configuration conf = (Configuration)getServletContext().getAttribute(
+HttpServer.CONF_CONTEXT_ATTRIBUTE);
+assert conf != null;
+return conf;
+  }
+
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response)
+  throws ServletException, IOException {
+String format = request.getParameter(FORMAT_PARAM);
+if (null == format) {
+  format = FORMAT_XML;
+}
+
+if (FORMAT_XML.equals(format)) {
+  response.setContentType("text/xml; charset=utf-8");
+} else if (FORMAT_JSON.equals(format)) {
+  response.setContentType("application/json; charset=utf-8");
+}
+
+Writer out = response.getWriter();
+try {
+  writeResponse(getConfFromContext(), out, format);
+} catch (BadFormatException bfe) {
+  response.sendError(HttpServletResponse.SC_BAD_REQUEST, bfe.getMessage());
+}
+out.close();
+  }
+
+  /**
+   * Guts of the servlet - extracted for easy testing.
+   */
+  static void writeResponse(Configuration conf, Writer out, String format)
+throws IOException, BadFormatException {
+if (FORMAT_JSON.equals(format)) {
+  Configuration.dumpConfiguration(conf, out);
+  
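The doGet() method above defaults to XML and signals unknown formats as bad requests. A standalone restatement of that format negotiation (the class and method names here are hypothetical; the real logic lives in ConfServlet):

```java
public class ConfFormatSketch {
    // Mirrors the servlet's handling of the "format" query parameter:
    // null falls back to xml, json is supported, anything else is rejected.
    static String contentTypeFor(String formatParam) {
        String format = (formatParam == null) ? "xml" : formatParam;
        if ("xml".equals(format)) {
            return "text/xml; charset=utf-8";
        }
        if ("json".equals(format)) {
            return "application/json; charset=utf-8";
        }
        return null;  // ConfServlet answers HTTP 400 via BadFormatException
    }

    public static void main(String[] args) {
        System.out.println(contentTypeFor(null));    // text/xml; charset=utf-8
        System.out.println(contentTypeFor("json"));  // application/json; charset=utf-8
    }
}
```

Keeping this negotiation in a small static method is also what lets the accompanying TestConfServlet exercise writeResponse() without standing up an HTTP server.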

svn commit: r1402741 - in /hadoop/common/branches/branch-1: ./ src/core/ src/core/org/apache/hadoop/fs/ src/hdfs/org/apache/hadoop/hdfs/server/datanode/ src/mapred/org/apache/hadoop/mapred/ src/test/o

2012-10-26 Thread eli
Author: eli
Date: Sat Oct 27 04:30:41 2012
New Revision: 1402741

URL: http://svn.apache.org/viewvc?rev=1402741&view=rev
Log:
HADOOP-8968. Add a flag to completely disable the worker version check. 
Contributed by Alejandro Abdelnur

Modified:
hadoop/common/branches/branch-1/CHANGES.txt
hadoop/common/branches/branch-1/src/core/core-default.xml

hadoop/common/branches/branch-1/src/core/org/apache/hadoop/fs/CommonConfigurationKeys.java

hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java

hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapred/TaskTracker.java

hadoop/common/branches/branch-1/src/test/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVersionCheck.java

hadoop/common/branches/branch-1/src/test/org/apache/hadoop/mapred/TestTaskTrackerVersionCheck.java

Modified: hadoop/common/branches/branch-1/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/CHANGES.txt?rev=1402741&r1=1402740&r2=1402741&view=diff
==
--- hadoop/common/branches/branch-1/CHANGES.txt (original)
+++ hadoop/common/branches/branch-1/CHANGES.txt Sat Oct 27 04:30:41 2012
@@ -107,6 +107,9 @@ Release 1.2.0 - unreleased
 HADOOP-8567. Port conf servlet to dump running configuration to branch 1.x.
 (Jing Zhao via suresh)
 
+HADOOP-8968. Add a flag to completely disable the worker version check.
+(tucu via eli)
+
   OPTIMIZATIONS
 
 HDFS-2533. Backport: Remove needless synchronization on some FSDataSet

Modified: hadoop/common/branches/branch-1/src/core/core-default.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/core/core-default.xml?rev=1402741&r1=1402740&r2=1402741&view=diff
==
--- hadoop/common/branches/branch-1/src/core/core-default.xml (original)
+++ hadoop/common/branches/branch-1/src/core/core-default.xml Sat Oct 27 
04:30:41 2012
@@ -578,6 +578,20 @@
</property>

<property>
  <name>hadoop.skip.worker.version.check</name>
  <value>false</value>
  <description>
By default datanodes refuse to connect to namenodes if their build
revision (svn revision) do not match, and tasktrackers refuse to
connect to jobtrackers if their build version (version, revision,
user, and source checksum) do not match. This option changes the
behavior of hadoop workers to skip doing a version check at all.
This option supersedes the 'hadoop.relaxed.worker.version.check'
option.
  </description>
</property>

<property>
  <name>hadoop.jetty.logs.serve.aliases</name>
  <value>true</value>
  <description>

Modified: 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/fs/CommonConfigurationKeys.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/core/org/apache/hadoop/fs/CommonConfigurationKeys.java?rev=1402741&r1=1402740&r2=1402741&view=diff
==
--- 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/fs/CommonConfigurationKeys.java
 (original)
+++ 
hadoop/common/branches/branch-1/src/core/org/apache/hadoop/fs/CommonConfigurationKeys.java
 Sat Oct 27 04:30:41 2012
@@ -74,6 +74,11 @@ public class CommonConfigurationKeys {
  "hadoop.relaxed.worker.version.check";
   public static final boolean HADOOP_RELAXED_VERSION_CHECK_DEFAULT = false;
 
+  /** See src/core/core-default.xml */
+  public static final String HADOOP_SKIP_VERSION_CHECK_KEY =
+  "hadoop.skip.worker.version.check";
+  public static final boolean HADOOP_SKIP_VERSION_CHECK_DEFAULT = false;
+
   /** Enable/Disable aliases serving from jetty */
   public static final String HADOOP_JETTY_LOGS_SERVE_ALIASES =
 "hadoop.jetty.logs.serve.aliases";

Modified: 
hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java?rev=1402741&r1=1402740&r2=1402741&view=diff
==
--- 
hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 (original)
+++ 
hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 Sat Oct 27 04:30:41 2012
@@ -77,11 +77,9 @@ import org.apache.hadoop.hdfs.security.t
 import org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager;
 import 
org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.AccessMode;
 import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
-import org.apache.hadoop.hdfs.server.common.GenerationStamp;
 import org.apache.hadoop.hdfs.server.common.HdfsConstants;
 import org.apache.hadoop.hdfs.server.common.HdfsConstants.StartupOption;
 import
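Taken together, the new key gives workers an unconditional off switch for the version check. A hedged sketch of the resulting decision (java.util.Properties stands in for Hadoop's Configuration here; the key name and default are taken from the diff, everything else is illustrative):

```java
import java.util.Properties;

public class VersionCheckFlag {
    // If hadoop.skip.worker.version.check is true, no comparison of build
    // version/revision happens at all; it supersedes the relaxed check.
    static boolean shouldCheckVersion(Properties conf) {
        boolean skip = Boolean.parseBoolean(
            conf.getProperty("hadoop.skip.worker.version.check", "false"));
        return !skip;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(shouldCheckVersion(conf));  // true: default is to check
        conf.setProperty("hadoop.skip.worker.version.check", "true");
        System.out.println(shouldCheckVersion(conf));  // false: check disabled
    }
}
```

The default of false preserves the existing behavior, so only operators who explicitly set the flag (e.g. during a rolling upgrade across mismatched builds) opt out of the check.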