svn commit: r916290 - in /hadoop/common/trunk: ./ .eclipse.templates/ ivy/ src/java/org/apache/hadoop/security/token/ src/test/core/org/apache/hadoop/io/compress/ src/test/core/org/apache/hadoop/secur

2010-02-25 Thread omalley
Author: omalley
Date: Thu Feb 25 14:14:18 2010
New Revision: 916290

URL: http://svn.apache.org/viewvc?rev=916290&view=rev
Log:
HADOOP-6579. Provide a mechanism for encoding/decoding Tokens from
a url-safe string and change the commons-codec library to 1.4. (omalley)

Modified:
hadoop/common/trunk/.eclipse.templates/.classpath
hadoop/common/trunk/CHANGES.txt
hadoop/common/trunk/ivy/hadoop-core-template.xml
hadoop/common/trunk/ivy/libraries.properties
hadoop/common/trunk/src/java/org/apache/hadoop/security/token/Token.java

hadoop/common/trunk/src/test/core/org/apache/hadoop/io/compress/TestCodec.java

hadoop/common/trunk/src/test/core/org/apache/hadoop/security/token/TestToken.java

Modified: hadoop/common/trunk/.eclipse.templates/.classpath
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/.eclipse.templates/.classpath?rev=916290&r1=916289&r2=916290&view=diff
==
--- hadoop/common/trunk/.eclipse.templates/.classpath (original)
+++ hadoop/common/trunk/.eclipse.templates/.classpath Thu Feb 25 14:14:18 2010
@@ -7,7 +7,7 @@



-   
+   




Modified: hadoop/common/trunk/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/CHANGES.txt?rev=916290&r1=916289&r2=916290&view=diff
==
--- hadoop/common/trunk/CHANGES.txt (original)
+++ hadoop/common/trunk/CHANGES.txt Thu Feb 25 14:14:18 2010
@@ -161,6 +161,9 @@
 HADOOP-6543. Allows secure clients to talk to unsecure clusters. 
 (Kan Zhang via ddas)
 
+HADOOP-6579. Provide a mechanism for encoding/decoding Tokens from
+a url-safe string and change the commons-codec library to 1.4. (omalley)
+
   OPTIMIZATIONS
 
 HADOOP-6467. Improve the performance on HarFileSystem.listStatus(..).

Modified: hadoop/common/trunk/ivy/hadoop-core-template.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/ivy/hadoop-core-template.xml?rev=916290&r1=916289&r2=916290&view=diff
==
--- hadoop/common/trunk/ivy/hadoop-core-template.xml (original)
+++ hadoop/common/trunk/ivy/hadoop-core-template.xml Thu Feb 25 14:14:18 2010
@@ -41,7 +41,7 @@
 
   commons-codec
   commons-codec
-  1.3
+  1.4
 
 
   commons-net

Modified: hadoop/common/trunk/ivy/libraries.properties
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/ivy/libraries.properties?rev=916290&r1=916289&r2=916290&view=diff
==
--- hadoop/common/trunk/ivy/libraries.properties (original)
+++ hadoop/common/trunk/ivy/libraries.properties Thu Feb 25 14:14:18 2010
@@ -23,7 +23,7 @@
 
 commons-cli.version=1.2
 commons-cli2.version=2.0-mahout
-commons-codec.version=1.3
+commons-codec.version=1.4
 commons-collections.version=3.1
 commons-httpclient.version=3.0.1
 commons-lang.version=2.4

Modified: 
hadoop/common/trunk/src/java/org/apache/hadoop/security/token/Token.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/org/apache/hadoop/security/token/Token.java?rev=916290&r1=916289&r2=916290&view=diff
==
--- hadoop/common/trunk/src/java/org/apache/hadoop/security/token/Token.java 
(original)
+++ hadoop/common/trunk/src/java/org/apache/hadoop/security/token/Token.java 
Thu Feb 25 14:14:18 2010
@@ -21,9 +21,15 @@
 import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
+import java.util.Arrays;
 
+import org.apache.commons.codec.binary.Base64;
+
+import org.apache.hadoop.io.DataInputBuffer;
+import org.apache.hadoop.io.DataOutputBuffer;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparator;
 import org.apache.hadoop.io.WritableUtils;
 
 /**
@@ -47,7 +53,21 @@
 kind = id.getKind();
 service = new Text();
   }
-  
+ 
+  /**
+   * Construct a token from the components.
+   * @param identifier the token identifier
+   * @param password the token's password
+   * @param kind the kind of token
+   * @param service the service for this token
+   */
+  public Token(byte[] identifier, byte[] password, Text kind, Text service) {
+this.identifier = identifier;
+this.password = password;
+this.kind = kind;
+this.service = service;
+  }
+
   /**
* Default constructor
*/
@@ -123,4 +143,103 @@
 kind.write(out);
 service.write(out);
   }
+
+  /**
+   * Generate a string with the url-quoted base64 encoded serialized form
+   * of the Writable.
+   * @param obj the object to serialize
+   * @return the encoded string
+   * @throws IOException
+   */
+  private static String encodeWritable(Writable obj) throws IOException {
+DataOutputBuffer buf = new DataOutputBuffer();
+obj.write
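The truncated `encodeWritable` above serializes the Writable into a `DataOutputBuffer` and then base64-encodes the bytes with the URL-safe alphabet that commons-codec 1.4 introduced. A minimal sketch of that encoding step, using the JDK's `java.util.Base64` as a stand-in for the commons-codec `Base64` the patch actually uses (the class and method names here are illustrative, not Hadoop's API):

```java
import java.util.Arrays;
import java.util.Base64;

public class UrlSafeTokenCodec {
    // Encode raw token bytes as a URL-safe base64 string:
    // '-' and '_' replace '+' and '/', and padding is dropped,
    // so the result can be embedded in a URL without quoting.
    static String encode(byte[] raw) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }

    // The URL decoder accepts unpadded input, so the round trip is exact.
    static byte[] decode(String encoded) {
        return Base64.getUrlDecoder().decode(encoded);
    }

    public static void main(String[] args) {
        byte[] raw = {(byte) 0xfb, (byte) 0xff}; // standard base64 would yield "+/8="
        String s = encode(raw);
        System.out.println(s);                   // prints "-_8"
        assert Arrays.equals(raw, decode(s));
    }
}
```

The URL-safe alphabet is the reason for the commons-codec upgrade in this commit: 1.3 had no URL-safe mode, so tokens passed in query strings would have needed an extra percent-encoding pass.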

svn commit: r916390 - in /hadoop/common/trunk: CHANGES.txt src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenIdentifier.java

2010-02-25 Thread omalley
Author: omalley
Date: Thu Feb 25 18:34:05 2010
New Revision: 916390

URL: http://svn.apache.org/viewvc?rev=916390&view=rev
Log:
HADOOP-6596. Add a version field to the AbstractDelegationTokenIdentifier's
serialized value. (omalley)

Modified:
hadoop/common/trunk/CHANGES.txt

hadoop/common/trunk/src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenIdentifier.java

Modified: hadoop/common/trunk/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/CHANGES.txt?rev=916390&r1=916389&r2=916390&view=diff
==
--- hadoop/common/trunk/CHANGES.txt (original)
+++ hadoop/common/trunk/CHANGES.txt Thu Feb 25 18:34:05 2010
@@ -164,6 +164,9 @@
 HADOOP-6579. Provide a mechanism for encoding/decoding Tokens from
 a url-safe string and change the commons-codec library to 1.4. (omalley)
 
+HADOOP-6596. Add a version field to the AbstractDelegationTokenIdentifier's
+serialized value. (omalley)
+
   OPTIMIZATIONS
 
 HADOOP-6467. Improve the performance on HarFileSystem.listStatus(..).

Modified: 
hadoop/common/trunk/src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenIdentifier.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenIdentifier.java?rev=916390&r1=916389&r2=916390&view=diff
==
--- 
hadoop/common/trunk/src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenIdentifier.java
 (original)
+++ 
hadoop/common/trunk/src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenIdentifier.java
 Thu Feb 25 18:34:05 2010
@@ -34,6 +34,7 @@
 @InterfaceAudience.LimitedPrivate({HDFS, MAPREDUCE})
 public abstract class AbstractDelegationTokenIdentifier 
 extends TokenIdentifier {
+  private static final byte VERSION = 0;
 
   private Text owner;
   private Text renewer;
@@ -145,6 +146,11 @@
   }
   
   public void readFields(DataInput in) throws IOException {
+byte version = in.readByte();
+if (version != VERSION) {
+   throw new IOException("Unknown version of delegation token " + 
+  version);
+}
 owner.readFields(in);
 renewer.readFields(in);
 realUser.readFields(in);
@@ -155,6 +161,7 @@
   }
 
   public void write(DataOutput out) throws IOException {
+out.writeByte(VERSION);
 owner.write(out);
 renewer.write(out);
 realUser.write(out);
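The pattern in this diff is version-prefixed serialization: `write` emits a version byte before any fields, and `readFields` rejects streams whose first byte does not match, failing fast instead of silently misreading the remaining fields. A self-contained sketch of the same shape (simplified to a single `owner` field; not the actual AbstractDelegationTokenIdentifier):

```java
import java.io.*;

public class VersionedRecord {
    private static final byte VERSION = 0;  // mirrors the VERSION field in the patch
    String owner = "";

    public void write(DataOutput out) throws IOException {
        out.writeByte(VERSION);             // version goes first, before any fields
        out.writeUTF(owner);
    }

    public void readFields(DataInput in) throws IOException {
        byte version = in.readByte();
        if (version != VERSION) {
            throw new IOException("Unknown version of delegation token " + version);
        }
        owner = in.readUTF();
    }

    public static void main(String[] args) throws IOException {
        VersionedRecord r = new VersionedRecord();
        r.owner = "alice";
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        r.write(new DataOutputStream(bytes));

        VersionedRecord back = new VersionedRecord();
        back.readFields(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(back.owner);     // prints "alice"

        // Tamper with the version byte: the read now fails fast.
        byte[] tampered = bytes.toByteArray();
        tampered[0] = 9;
        try {
            new VersionedRecord().readFields(new DataInputStream(
                    new ByteArrayInputStream(tampered)));
        } catch (IOException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

Starting at version 0 leaves room to evolve the identifier's wire format later without ambiguity about which layout a persisted token uses.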




[Hadoop Wiki] Update of "PoweredBy" by voyager

2010-02-25 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "PoweredBy" page has been changed by voyager.
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=178&rev2=179

--

   * [[http://www.weblab.infosci.cornell.edu/|Cornell University Web Lab]]
* Generating web graphs on 100 nodes (dual 2.4GHz Xeon Processor, 2 GB RAM, 
72GB Hard Drive)
  
- 
- 
- 
   * [[http://www.deepdyve.com|Deepdyve]]
* Elastic cluster with 5-80 nodes
* We use Hadoop to create our indexes of deep web content and to provide a 
high availability and high bandwidth storage service for index shards for our 
search cluster.
@@ -103, +100 @@

* We use Hadoop to filter and index our listings, removing exact duplicates 
and grouping similar ones.
* We plan to use Pig very shortly to produce statistics.
  
- 
   * [[http://blog.espol.edu.ec/hadoop/|ESPOL University (Escuela Superior 
Politécnica del Litoral) in Guayaquil, Ecuador]]
* 4 nodes proof-of-concept cluster.
* We use Hadoop in a Data-Intensive Computing capstone course. The course 
projects cover topics like information retrieval, machine learning, social 
network analysis, business intelligence, and network security.
@@ -117, +113 @@

* Facial similarity and recognition across large datasets.
* Image content based advertising and auto-tagging for social media.
* Image based video copyright protection.
- 
  
   * [[http://www.facebook.com/|Facebook]]
* We use Hadoop to store copies of internal log and dimension data sources 
and use it as a source for reporting/analytics and machine learning.
@@ -141, +136 @@

   * [[http://www.google.com|Google]]
* 
[[http://www.google.com/intl/en/press/pressrel/20071008_ibm_univ.html|University
 Initiative to Address Internet-Scale Computing Challenges]]
  
- 
- 
   * [[http://www.gruter.com|Gruter. Corp.]]
* 30 machine cluster  (4 cores, 1TB~2TB/machine storage)
* storage for blog data and web documents
@@ -226, +219 @@

  
   * [[http://www.lotame.com|Lotame]]
* Using Hadoop and Hbase for storage, log analysis, and pattern 
discovery/analysis.
+ 
+ 
+  * [[http://www.makara.com//|Makara]]
+   * Using ZooKeeper on a 2-node cluster on VMware Workstation, Amazon EC2, Xen
+   * Using zkpython
+   * Looking into expanding to a 100-node cluster
+ 
  
   * [[http://www.crmcs.com//|MicroCode]]
* 18 node cluster (Quad-Core Intel Xeon, 1TB/node storage)


svn commit: r916467 - in /hadoop/common/trunk: ./ src/java/ src/java/org/apache/hadoop/conf/ src/java/org/apache/hadoop/fs/ src/java/org/apache/hadoop/http/ src/java/org/apache/hadoop/ipc/ src/java/or

2010-02-25 Thread ddas
Author: ddas
Date: Thu Feb 25 21:39:38 2010
New Revision: 916467

URL: http://svn.apache.org/viewvc?rev=916467&view=rev
Log:
HADOOP-6568. Adds authorization for the default servlets. Contributed by Vinod 
Kumar Vavilapalli.

Added:

hadoop/common/trunk/src/java/org/apache/hadoop/http/AdminAuthorizedServlet.java
Modified:
hadoop/common/trunk/CHANGES.txt
hadoop/common/trunk/src/java/core-default.xml
hadoop/common/trunk/src/java/org/apache/hadoop/conf/ConfServlet.java

hadoop/common/trunk/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
hadoop/common/trunk/src/java/org/apache/hadoop/http/HttpServer.java
hadoop/common/trunk/src/java/org/apache/hadoop/ipc/Server.java
hadoop/common/trunk/src/java/org/apache/hadoop/log/LogLevel.java
hadoop/common/trunk/src/java/org/apache/hadoop/metrics/MetricsServlet.java

hadoop/common/trunk/src/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java
hadoop/common/trunk/src/test/core/org/apache/hadoop/cli/CLITestHelper.java
hadoop/common/trunk/src/test/core/org/apache/hadoop/http/TestHttpServer.java
hadoop/common/trunk/src/test/core/org/apache/hadoop/ipc/TestRPC.java

Modified: hadoop/common/trunk/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/CHANGES.txt?rev=916467&r1=916466&r2=916467&view=diff
==
--- hadoop/common/trunk/CHANGES.txt (original)
+++ hadoop/common/trunk/CHANGES.txt Thu Feb 25 21:39:38 2010
@@ -62,6 +62,9 @@
 to change the default 1 MB, the maximum size when large IPC handler 
 response buffer is reset. (suresh)
 
+HADOOP-6568. Adds authorization for the default servlets. 
+(Vinod Kumar Vavilapalli via ddas)
+
   IMPROVEMENTS
 
 HADOOP-6283. Improve the exception messages thrown by

Modified: hadoop/common/trunk/src/java/core-default.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/core-default.xml?rev=916467&r1=916466&r2=916467&view=diff
==
--- hadoop/common/trunk/src/java/core-default.xml (original)
+++ hadoop/common/trunk/src/java/core-default.xml Thu Feb 25 21:39:38 2010
@@ -54,6 +54,16 @@
 
 
 
+  hadoop.cluster.administrators
+  Users and/or groups who are designated as the administrators of a
+  hadoop cluster. For specifying a list of users and groups the format to use
  is "user1,user2 group1,group2". If set to '*', it allows all users/groups to
  do administration operations on the cluster. If set to '', it allows none.
+  
+  ${user.name}
+
+
+
   hadoop.security.authorization
   false
   Is service-level authorization enabled?
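The `hadoop.cluster.administrators` value is a space-separated pair of comma-separated lists: users first, then groups, with `*` meaning everyone and the empty string meaning no one. A sketch of how such an ACL string can be parsed and checked (an illustration of the documented format only, not Hadoop's actual AccessControlList implementation):

```java
import java.util.*;

public class AdminAcl {
    private final Set<String> users = new HashSet<>();
    private final Set<String> groups = new HashSet<>();
    private final boolean allowAll;

    // Parse the "user1,user2 group1,group2" format; "*" admits everyone,
    // "" admits no one.
    AdminAcl(String acl) {
        allowAll = acl.trim().equals("*");
        if (allowAll) return;
        String[] parts = acl.split(" ", 2);               // users, then groups
        if (!parts[0].isEmpty())
            users.addAll(Arrays.asList(parts[0].split(",")));
        if (parts.length > 1 && !parts[1].trim().isEmpty())
            groups.addAll(Arrays.asList(parts[1].trim().split(",")));
    }

    // A caller is an admin if listed directly or via any of its groups.
    boolean isAdmin(String user, Collection<String> userGroups) {
        if (allowAll || users.contains(user)) return true;
        for (String g : userGroups)
            if (groups.contains(g)) return true;
        return false;
    }

    public static void main(String[] args) {
        AdminAcl acl = new AdminAcl("alice,bob ops,admins");
        System.out.println(acl.isAdmin("alice", List.of()));              // true: listed user
        System.out.println(acl.isAdmin("carol", List.of("ops")));         // true: listed group
        System.out.println(acl.isAdmin("carol", List.of("dev")));         // false
        System.out.println(new AdminAcl("*").isAdmin("anyone", List.of())); // true
    }
}
```

The default of `${user.name}` in core-default.xml means the user who started the daemons is the sole administrator unless the cluster overrides it.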

Modified: hadoop/common/trunk/src/java/org/apache/hadoop/conf/ConfServlet.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/org/apache/hadoop/conf/ConfServlet.java?rev=916467&r1=916466&r2=916467&view=diff
==
--- hadoop/common/trunk/src/java/org/apache/hadoop/conf/ConfServlet.java 
(original)
+++ hadoop/common/trunk/src/java/org/apache/hadoop/conf/ConfServlet.java Thu 
Feb 25 21:39:38 2010
@@ -52,6 +52,13 @@
   @Override
   public void doGet(HttpServletRequest request, HttpServletResponse response)
   throws ServletException, IOException {
+
+// Do the authorization
+if (!HttpServer.hasAdministratorAccess(getServletContext(), request,
+response)) {
+  return;
+}
+
 String format = request.getParameter(FORMAT_PARAM);
 if (null == format) {
   format = FORMAT_XML;

Modified: 
hadoop/common/trunk/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java?rev=916467&r1=916466&r2=916467&view=diff
==
--- 
hadoop/common/trunk/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java 
(original)
+++ 
hadoop/common/trunk/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java 
Thu Feb 25 21:39:38 2010
@@ -133,5 +133,12 @@
   public static final String  HADOOP_SECURITY_GROUP_MAPPING = 
"hadoop.security.group.mapping";
   public static final String  HADOOP_SECURITY_GROUPS_CACHE_SECS = 
"hadoop.security.groups.cache.secs";
   public static final String  HADOOP_SECURITY_AUTHENTICATION = 
"hadoop.security.authentication";
+  public static final String HADOOP_SECURITY_AUTHORIZATION =
+  "hadoop.security.authorization";
+  /**
+   * ACL denoting the administrator ACLs for a hadoop cluster.
+   */
+  public final static String HADOOP_CLUSTER_ADMINISTRATORS_PROPERTY =
+  "hadoop.cluster.administrators";
 }
 

Added: 
hadoop/common/trunk/src/java/org/apache/hadoop/http/AdminAuthorizedServlet.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/org/apache/hadoop/http/AdminAuthorizedServlet.java?rev=916467&view=auto

svn commit: r916468 - in /hadoop/common/trunk: CHANGES.txt src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java src/test/core/org/apache/hadoop/security/token

2010-02-25 Thread shv
Author: shv
Date: Thu Feb 25 21:43:20 2010
New Revision: 916468

URL: http://svn.apache.org/viewvc?rev=916468&view=rev
Log:
HADOOP-6573. Support for persistent delegation tokens. Contributed by Jitendra 
Pandey.

Modified:
hadoop/common/trunk/CHANGES.txt

hadoop/common/trunk/src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java

hadoop/common/trunk/src/test/core/org/apache/hadoop/security/token/delegation/TestDelegationToken.java

Modified: hadoop/common/trunk/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/CHANGES.txt?rev=916468&r1=916467&r2=916468&view=diff
==
--- hadoop/common/trunk/CHANGES.txt (original)
+++ hadoop/common/trunk/CHANGES.txt Thu Feb 25 21:43:20 2010
@@ -170,6 +170,9 @@
 HADOOP-6596. Add a version field to the AbstractDelegationTokenIdentifier's
 serialized value. (omalley)
 
+HADOOP-6573. Support for persistent delegation tokens.
+(Jitendra Pandey via shv)
+
   OPTIMIZATIONS
 
 HADOOP-6467. Improve the performance on HarFileSystem.listStatus(..).

Modified: 
hadoop/common/trunk/src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java?rev=916468&r1=916467&r2=916468&view=diff
==
--- 
hadoop/common/trunk/src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
 (original)
+++ 
hadoop/common/trunk/src/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
 Thu Feb 25 21:43:20 2010
@@ -52,23 +52,30 @@
 
   /** 
* Cache of currently valid tokens, mapping from DelegationTokenIdentifier 
-   * to DelegationTokenInformation. Protected by its own lock.
+   * to DelegationTokenInformation. Protected by this object lock.
*/
-  private final Map currentTokens 
+  protected final Map currentTokens 
   = new HashMap();
   
   /**
-   * Sequence number to create DelegationTokenIdentifier
+   * Sequence number to create DelegationTokenIdentifier.
+   * Protected by this object lock.
*/
-  private int delegationTokenSequenceNumber = 0;
+  protected int delegationTokenSequenceNumber = 0;
   
-  private final Map allKeys 
+  /**
+   * Access to allKeys is protected by this object lock
+   */
+  protected final Map allKeys 
   = new HashMap();
   
   /**
-   * Access to currentId and currentKey is protected by this object lock.
+   * Access to currentId is protected by this object lock.
+   */
+  protected int currentId = 0;
+  /**
+   * Access to currentKey is protected by this object lock
*/
-  private int currentId = 0;
   private DelegationKey currentKey;
   
   private long keyUpdateInterval;
@@ -76,7 +83,7 @@
   private long tokenRemoverScanInterval;
   private long tokenRenewInterval;
   private Thread tokenRemoverThread;
-  private volatile boolean running;
+  protected volatile boolean running;
 
   public AbstractDelegationTokenSecretManager(long delegationKeyUpdateInterval,
   long delegationTokenMaxLifetime, long delegationTokenRenewInterval,
@@ -112,27 +119,50 @@
 return allKeys.values().toArray(new DelegationKey[0]);
   }
   
-  /** Update the current master key */
-  private synchronized void updateCurrentKey() throws IOException {
+  protected void logUpdateMasterKey(DelegationKey key) throws IOException {
+return;
+  }
+  
+  /** 
+   * Update the current master key 
+   * This is called once by startThreads before tokenRemoverThread is created, 
+   * and only by tokenRemoverThread afterwards.
+   */
+  private void updateCurrentKey() throws IOException {
 LOG.info("Updating the current master key for generating delegation 
tokens");
 /* Create a new currentKey with an estimated expiry date. */
-currentId++;
-currentKey = new DelegationKey(currentId, System.currentTimeMillis()
+int newCurrentId;
+synchronized (this) {
+  newCurrentId = currentId+1;
+}
+DelegationKey newKey = new DelegationKey(newCurrentId, System
+.currentTimeMillis()
 + keyUpdateInterval + tokenMaxLifetime, generateSecret());
-allKeys.put(currentKey.getKeyId(), currentKey);
+//Log must be invoked outside the lock on 'this'
+logUpdateMasterKey(newKey);
+synchronized (this) {
+  currentId = newKey.getKeyId();
+  currentKey = newKey;
+  allKeys.put(currentKey.getKeyId(), currentKey);
+}
   }
   
-  /** Update the current master key for generating delegation tokens */
-  public synchronized void rollMasterKey() throws IOException {
-removeExpiredKeys();
-/* set final expiry date for retiring currentKey */
-currentKey.setExpiryDate(System.currentTimeMillis() + tokenMaxLifetime);
-/*
- * current

svn commit: r916529 - in /hadoop/common/trunk: CHANGES.txt src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java src/java/org/apache/hadoop/ipc/Server.java

2010-02-25 Thread shv
Author: shv
Date: Fri Feb 26 01:37:57 2010
New Revision: 916529

URL: http://svn.apache.org/viewvc?rev=916529&view=rev
Log:
HADOOP-1849. Add undocumented configuration parameter for per handler call 
queue size in IPC Server. Contributed by Konstantin Shvachko.

Modified:
hadoop/common/trunk/CHANGES.txt

hadoop/common/trunk/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
hadoop/common/trunk/src/java/org/apache/hadoop/ipc/Server.java

Modified: hadoop/common/trunk/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/CHANGES.txt?rev=916529&r1=916528&r2=916529&view=diff
==
--- hadoop/common/trunk/CHANGES.txt (original)
+++ hadoop/common/trunk/CHANGES.txt Fri Feb 26 01:37:57 2010
@@ -1392,14 +1392,6 @@
 HADOOP-6218. Adds a feature where TFile can be split by Record
 Sequence number. (Hong Tang and Raghu Angadi via ddas)
 
-  IMPROVEMENTS
-
-HADOOP-5611. Fix C++ libraries to build on Debian Lenny. (Todd Lipcon
-via tomwhite)
-
-HADOOP-5612. Some c++ scripts are not chmodded before ant execution.
-(Todd Lipcon via tomwhite)
-
   BUG FIXES
 
 HADOOP-6231. Allow caching of filesystem instances to be disabled on a
@@ -1424,6 +1416,17 @@
 HADOOP-6498. IPC client bug may cause rpc call hang. (Ruyue Ma and
 hairong via hairong)
 
+  IMPROVEMENTS
+
+HADOOP-5611. Fix C++ libraries to build on Debian Lenny. (Todd Lipcon
+via tomwhite)
+
+HADOOP-5612. Some c++ scripts are not chmodded before ant execution.
+(Todd Lipcon via tomwhite)
+
+HADOOP-1849. Add undocumented configuration parameter for per handler 
+call queue size in IPC Server. (shv)
+
 Release 0.20.1 - 2009-09-01
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java?rev=916529&r1=916528&r2=916529&view=diff
==
--- 
hadoop/common/trunk/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java 
(original)
+++ 
hadoop/common/trunk/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java 
Fri Feb 26 01:37:57 2010
@@ -123,6 +123,15 @@
"ipc.server.max.response.size";
   public static final int IPC_SERVER_RPC_MAX_RESPONSE_SIZE_DEFAULT = 
 1024*1024;
+  /**
+   * How many calls per handler are allowed in the queue.
+   */
+  public static final String  IPC_SERVER_HANDLER_QUEUE_SIZE_KEY = 
+   "ipc.server.handler.queue.size";
+  /**
+   * The default number of calls per handler in the queue.
+   */
+  public static final int IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT = 100;
 
   public static final String  HADOOP_RPC_SOCKET_FACTORY_CLASS_DEFAULT_KEY = 

"hadoop.rpc.socket.factory.class.default";

Modified: hadoop/common/trunk/src/java/org/apache/hadoop/ipc/Server.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/org/apache/hadoop/ipc/Server.java?rev=916529&r1=916528&r2=916529&view=diff
==
--- hadoop/common/trunk/src/java/org/apache/hadoop/ipc/Server.java (original)
+++ hadoop/common/trunk/src/java/org/apache/hadoop/ipc/Server.java Fri Feb 26 
01:37:57 2010
@@ -60,8 +60,6 @@
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
-
-import static org.apache.hadoop.fs.CommonConfigurationKeys.*;
 import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.WritableUtils;
 import org.apache.hadoop.ipc.metrics.RpcMetrics;
@@ -98,12 +96,7 @@
   // 3 : Introduce the protocol into the RPC connection header
   // 4 : Introduced SASL security layer
   public static final byte CURRENT_VERSION = 4;
-  
-  /**
-   * How many calls/handler are allowed in the queue.
-   */
-  private static final int MAX_QUEUE_SIZE_PER_HANDLER = 100;
-  
+
   /**
* Initial and max size of response buffer
*/
@@ -1288,9 +1281,12 @@
 this.paramClass = paramClass;
 this.handlerCount = handlerCount;
 this.socketSendBufferSize = 0;
-this.maxQueueSize = handlerCount * MAX_QUEUE_SIZE_PER_HANDLER;
-this.maxRespSize = conf.getInt(IPC_SERVER_RPC_MAX_RESPONSE_SIZE_KEY,
-   IPC_SERVER_RPC_MAX_RESPONSE_SIZE_DEFAULT);
+this.maxQueueSize = handlerCount * conf.getInt(
+CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY,
+CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT);
+this.maxRespSize = conf.getInt(
+CommonConfigurationKeys.IPC_SERVER_RPC_MAX_RESPONSE_SIZE_KEY,
+CommonConfigurationKeys.IPC_SERVER_RPC_MAX_RESPONSE_SIZE_DEFAULT);
 thi
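The change replaces the hard-coded `MAX_QUEUE_SIZE_PER_HANDLER = 100` with a configuration lookup, so the total call queue is `handlerCount` times a tunable per-handler size. The shape of that computation, with `java.util.Properties` standing in for Hadoop's `Configuration.getInt` (the class name here is illustrative):

```java
import java.util.Properties;

public class QueueSizing {
    static final String KEY = "ipc.server.handler.queue.size";
    static final int DEFAULT = 100;   // the old hard-coded per-handler limit

    // Total queue capacity scales with the number of handler threads.
    static int maxQueueSize(Properties conf, int handlerCount) {
        int perHandler = Integer.parseInt(
                conf.getProperty(KEY, String.valueOf(DEFAULT)));
        return handlerCount * perHandler;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(maxQueueSize(conf, 10));  // 10 * 100 = 1000 (default)
        conf.setProperty(KEY, "25");
        System.out.println(maxQueueSize(conf, 10));  // 10 * 25  = 250
    }
}
```

Because the default matches the old constant, clusters that never set the parameter see identical behavior, which is why it could stay undocumented.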

svn commit: r916530 - in /hadoop/common/branches/branch-0.21: CHANGES.txt src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java src/java/org/apache/hadoop/ipc/Server.java

2010-02-25 Thread shv
Author: shv
Date: Fri Feb 26 01:45:38 2010
New Revision: 916530

URL: http://svn.apache.org/viewvc?rev=916530&view=rev
Log:
HADOOP-1849. Merge -r 916528:916529 from trunk to branch-0.21.

Modified:
hadoop/common/branches/branch-0.21/CHANGES.txt

hadoop/common/branches/branch-0.21/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java

hadoop/common/branches/branch-0.21/src/java/org/apache/hadoop/ipc/Server.java

Modified: hadoop/common/branches/branch-0.21/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.21/CHANGES.txt?rev=916530&r1=916529&r2=916530&view=diff
==
--- hadoop/common/branches/branch-0.21/CHANGES.txt (original)
+++ hadoop/common/branches/branch-0.21/CHANGES.txt Fri Feb 26 01:45:38 2010
@@ -1185,6 +1185,11 @@
 HADOOP-6498. IPC client bug may cause rpc call hang. (Ruyue Ma and
 hairong via hairong)
 
+  IMPROVEMENTS
+
+HADOOP-1849. Add undocumented configuration parameter for per handler 
+call queue size in IPC Server. (shv)
+
 Release 0.20.1 - 2009-09-01
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-0.21/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.21/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java?rev=916530&r1=916529&r2=916530&view=diff
==
--- 
hadoop/common/branches/branch-0.21/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 (original)
+++ 
hadoop/common/branches/branch-0.21/src/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 Fri Feb 26 01:45:38 2010
@@ -119,6 +119,16 @@
   public static final int IPC_CLIENT_IDLETHRESHOLD_DEFAULT = 4000;
   public static final String  IPC_SERVER_TCPNODELAY_KEY = 
"ipc.server.tcpnodelay";
   public static final boolean IPC_SERVER_TCPNODELAY_DEFAULT = false;
+  /**
+   * How many calls per handler are allowed in the queue.
+   */
+  public static final String  IPC_SERVER_HANDLER_QUEUE_SIZE_KEY = 
+   "ipc.server.handler.queue.size";
+  /**
+   * The default number of calls per handler in the queue.
+   */
+  public static final int IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT = 100;
+  
 
   public static final String  HADOOP_RPC_SOCKET_FACTORY_CLASS_DEFAULT_KEY = 

"hadoop.rpc.socket.factory.class.default";

Modified: 
hadoop/common/branches/branch-0.21/src/java/org/apache/hadoop/ipc/Server.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.21/src/java/org/apache/hadoop/ipc/Server.java?rev=916530&r1=916529&r2=916530&view=diff
==
--- 
hadoop/common/branches/branch-0.21/src/java/org/apache/hadoop/ipc/Server.java 
(original)
+++ 
hadoop/common/branches/branch-0.21/src/java/org/apache/hadoop/ipc/Server.java 
Fri Feb 26 01:45:38 2010
@@ -60,6 +60,7 @@
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.WritableUtils;
@@ -84,12 +85,7 @@
   // 1 : Introduce ping and server does not throw away RPCs
   // 3 : Introduce the protocol into the RPC connection header
   public static final byte CURRENT_VERSION = 3;
-  
-  /**
-   * How many calls/handler are allowed in the queue.
-   */
-  private static final int MAX_QUEUE_SIZE_PER_HANDLER = 100;
-  
+
   /**
* Initial and max size of response buffer
*/
@@ -1034,7 +1030,9 @@
 this.paramClass = paramClass;
 this.handlerCount = handlerCount;
 this.socketSendBufferSize = 0;
-this.maxQueueSize = handlerCount * MAX_QUEUE_SIZE_PER_HANDLER;
+this.maxQueueSize = handlerCount * conf.getInt(
+CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY,
+CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT);
 this.callQueue  = new LinkedBlockingQueue(maxQueueSize); 
 this.maxIdleTime = 2*conf.getInt("ipc.client.connection.maxidletime", 
1000);
 this.maxConnectionsToNuke = conf.getInt("ipc.client.kill.max", 10);




svn commit: r916531 - in /hadoop/common/branches/branch-0.20: CHANGES.txt src/core/org/apache/hadoop/ipc/Server.java

2010-02-25 Thread shv
Author: shv
Date: Fri Feb 26 01:46:19 2010
New Revision: 916531

URL: http://svn.apache.org/viewvc?rev=916531&view=rev
Log:
HADOOP-1849. Merge -r 916528:916529 from trunk to branch-0.20.

Modified:
hadoop/common/branches/branch-0.20/CHANGES.txt

hadoop/common/branches/branch-0.20/src/core/org/apache/hadoop/ipc/Server.java

Modified: hadoop/common/branches/branch-0.20/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20/CHANGES.txt?rev=916531&r1=916530&r2=916531&view=diff
==
--- hadoop/common/branches/branch-0.20/CHANGES.txt (original)
+++ hadoop/common/branches/branch-0.20/CHANGES.txt Fri Feb 26 01:46:19 2010
@@ -126,6 +126,9 @@
 HADOOP-5612. Some c++ scripts are not chmodded before ant execution.
 (Todd Lipcon via tomwhite)
 
+HADOOP-1849. Add undocumented configuration parameter for per handler 
+call queue size in IPC Server. (shv)
+
 Release 0.20.1 - 2009-09-01
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/branches/branch-0.20/src/core/org/apache/hadoop/ipc/Server.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20/src/core/org/apache/hadoop/ipc/Server.java?rev=916531&r1=916530&r2=916531&view=diff
==
--- 
hadoop/common/branches/branch-0.20/src/core/org/apache/hadoop/ipc/Server.java 
(original)
+++ 
hadoop/common/branches/branch-0.20/src/core/org/apache/hadoop/ipc/Server.java 
Fri Feb 26 01:46:19 2010
@@ -88,7 +88,9 @@
   /**
* How many calls/handler are allowed in the queue.
*/
-  private static final int MAX_QUEUE_SIZE_PER_HANDLER = 100;
+  private static final int IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT = 100;
+  private static final String  IPC_SERVER_HANDLER_QUEUE_SIZE_KEY = 
+"ipc.server.handler.queue.size";
   
   public static final Log LOG = LogFactory.getLog(Server.class);
 
@@ -1016,7 +1018,9 @@
 this.paramClass = paramClass;
 this.handlerCount = handlerCount;
 this.socketSendBufferSize = 0;
-this.maxQueueSize = handlerCount * MAX_QUEUE_SIZE_PER_HANDLER;
+this.maxQueueSize = handlerCount * conf.getInt(
+IPC_SERVER_HANDLER_QUEUE_SIZE_KEY,
+IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT);
 this.callQueue  = new LinkedBlockingQueue(maxQueueSize); 
 this.maxIdleTime = 2*conf.getInt("ipc.client.connection.maxidletime", 
1000);
 this.maxConnectionsToNuke = conf.getInt("ipc.client.kill.max", 10);




[Hadoop Wiki] Update of "Hive/PoweredBy" by ZhengShao

2010-02-25 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "Hive/PoweredBy" page has been changed by ZhengShao.
The comment on this change is: Adding Tao Bao per request from Mafish Liu.
http://wiki.apache.org/hadoop/Hive/PoweredBy?action=diff&rev1=16&rev2=17

--

  *  [[http://www.videoegg.com|VideoEgg]]
  We use Hive as the core database for our data warehouse where we track and 
analyze all the usage data of the ads across our network.
  
+ *  TaoBao (www dot taobao dot com)
+ We use Hive for data mining, internal log analysis and ad-hoc queries. We 
+ also do some extensive development work on Hive.
+ 


svn commit: r916568 - in /hadoop/common/tags: release-0.20.2-rc1/ release-0.20.2-rc2/ release-0.20.2-rc3/

2010-02-25 Thread cdouglas
Author: cdouglas
Date: Fri Feb 26 05:12:08 2010
New Revision: 916568

URL: http://svn.apache.org/viewvc?rev=916568&view=rev
Log:
Remove old 0.20.2 release candidates

Removed:
hadoop/common/tags/release-0.20.2-rc1/
hadoop/common/tags/release-0.20.2-rc2/
hadoop/common/tags/release-0.20.2-rc3/



svn commit: r916569 - in /hadoop/common/tags: release-0.20.2-rc4/ release-0.20.2/

2010-02-25 Thread cdouglas
Author: cdouglas
Date: Fri Feb 26 05:13:14 2010
New Revision: 916569

URL: http://svn.apache.org/viewvc?rev=916569&view=rev
Log:
Hadoop 0.20.2 release

Added:
hadoop/common/tags/release-0.20.2/   (props changed)
  - copied from r916568, hadoop/common/tags/release-0.20.2-rc4/
Removed:
hadoop/common/tags/release-0.20.2-rc4/

Propchange: hadoop/common/tags/release-0.20.2/
--
--- svn:ignore (added)
+++ svn:ignore Fri Feb 26 05:13:14 2010
@@ -0,0 +1,7 @@
+build
+logs
+.classpath
+.project
+.settings
+
+.externalToolBuilders

Propchange: hadoop/common/tags/release-0.20.2/
--
--- svn:mergeinfo (added)
+++ svn:mergeinfo Fri Feb 26 05:13:14 2010
@@ -0,0 +1,3 @@
+/hadoop/common/trunk:910709
+/hadoop/core/branches/branch-0.19:713112
+/hadoop/core/trunk:727001,727117,727191,727212,727217,727228,727255,727869,728187,729052,729987,732385,732572,732613,732777,732838,732869,733887,734870,734916,736426,738328,738697,740077,740157,741703,741762,743745,743816,743892,744894,745180,746010,746206,746227,746233,746274,746338,746902-746903,746925,746944,746968,746970,747279,747289,747802,748084,748090,748783,749262,749318,749863,750533,752073,752609,752834,752836,752913,752932,753112-753113,753346,754645,754847,754927,755035,755226,755348,755370,755418,755426,755790,755905,755938,755960,755986,755998,756352,757448,757624,757849,758156,758180,759398,759932,760502,760783,761046,761482,761632,762216,762879,763107,763502,764967,765016,765809,765951,771607,771661,772844,772876,772884,772920,773889,776638,778962,778966,779893,781720,784661,785046,785569




[Hadoop Wiki] Update of "Hive/Roadmap" by NamitJain

2010-02-25 Thread Apache Wiki

The "Hive/Roadmap" page has been changed by NamitJain.
http://wiki.apache.org/hadoop/Hive/Roadmap?action=diff&rev1=21&rev2=22

--

   * [[http://issues.apache.org/jira/browse/HIVE-870|semijoin]]
   * [[http://issues.apache.org/jira/browse/HIVE-655|UDTF]]
   * [[http://issues.apache.org/jira/browse/HIVE-31|Support for Create Table as 
Select (is available on trunk and versions later than 0.4.0)]]
+  * [[http://issues.apache.org/jira/browse/HIVE-931|Using sort and bucketing 
properties to optimize queries]]
+  * UNIQUE JOINS - that support a different semantics than the outer joins
+  * TypedBytes for user scripts
+  * [[Hive/ViewDev|Views]] for changing table names/columns without breaking 
existing queries [big]
+  * [[http://issues.apache.org/jira/browse/HIVE-917|Bucketed Map Join]]
  
  == Features working on now ==
   * Hive CLI improvement/Error messages:
@@ -23, +28 @@

* Bucketed Medium/Percentile
* GraphViz for graphing operator tree
* Multiple-partition inserts [big]
-   * [[Hive/ViewDev|Views]] for changing table names/columns without breaking 
existing queries [big]
* GenericUDTF
   * Performance
-   * TypedBytes for user scripts
+   * [[http://issues.apache.org/jira/browse/HIVE-1194|Sort Merge Join]]
   * Hive Freeway
* Allow Hive partition locations to be file/files.
  
@@ -37, +41 @@

  
  == More long-term Features (yet to be prioritized) ==
   * Support for Indexes
-  * UNIQUE JOINS - that support a different semantics than the outer joins
   * Support for Insert Appends
-  * Using sort and bucketing properties to optimize queries
   * Support for IN, exists and correlated subqueries
   * More native types - Enums, timestamp
   * Passing schema to scripts through an environment variable


[Hadoop Wiki] Update of "Hive/Roadmap" by NamitJain

2010-02-25 Thread Apache Wiki

The "Hive/Roadmap" page has been changed by NamitJain.
http://wiki.apache.org/hadoop/Hive/Roadmap?action=diff&rev1=22&rev2=23

--

   * [[http://issues.apache.org/jira/browse/HIVE-31|Support for Create Table as 
Select (is available on trunk and versions later than 0.4.0)]]
   * [[http://issues.apache.org/jira/browse/HIVE-931|Using sort and bucketing 
properties to optimize queries]]
   * UNIQUE JOINS - that support a different semantics than the outer joins
-  * TypedBytes for user scripts
+  * [[http://issues.apache.org/jira/browse/HIVE-1023|TypedBytes for user 
scripts]]
   * [[Hive/ViewDev|Views]] for changing table names/columns without breaking 
existing queries [big]
   * [[http://issues.apache.org/jira/browse/HIVE-917|Bucketed Map Join]]
  


[Hadoop Wiki] Update of "Hive/Roadmap" by NamitJain

2010-02-25 Thread Apache Wiki

The "Hive/Roadmap" page has been changed by NamitJain.
http://wiki.apache.org/hadoop/Hive/Roadmap?action=diff&rev1=23&rev2=24

--

   * ODBC driver [[Hive/HiveODBC]]
   * [[http://issues.apache.org/jira/browse/HIVE-870|semijoin]]
   * [[http://issues.apache.org/jira/browse/HIVE-655|UDTF]]
-  * [[http://issues.apache.org/jira/browse/HIVE-31|Support for Create Table as 
Select (is available on trunk and versions later than 0.4.0)]]
+  * [[http://issues.apache.org/jira/browse/HIVE-31|Create Table as Select]]
   * [[http://issues.apache.org/jira/browse/HIVE-931|Using sort and bucketing 
properties to optimize queries]]
-  * UNIQUE JOINS - that support a different semantics than the outer joins
+  * [[http://issues.apache.org/jira/browse/HIVE-591|UNIQUE JOINS - that 
support a different semantics than the outer joins]]
   * [[http://issues.apache.org/jira/browse/HIVE-1023|TypedBytes for user 
scripts]]
   * [[Hive/ViewDev|Views]] for changing table names/columns without breaking 
existing queries [big]
   * [[http://issues.apache.org/jira/browse/HIVE-917|Bucketed Map Join]]
+ 
  
  == Features working on now ==
   * Hive CLI improvement/Error messages:


[Hadoop Wiki] Update of "Hive/Roadmap" by NamitJain

2010-02-25 Thread Apache Wiki

The "Hive/Roadmap" page has been changed by NamitJain.
http://wiki.apache.org/hadoop/Hive/Roadmap?action=diff&rev1=24&rev2=25

--

   * [[http://issues.apache.org/jira/browse/HIVE-1023|TypedBytes for user 
scripts]]
   * [[Hive/ViewDev|Views]] for changing table names/columns without breaking 
existing queries [big]
   * [[http://issues.apache.org/jira/browse/HIVE-917|Bucketed Map Join]]
+  * [[http://issues.apache.org/jira/browse/HIVE-74|Combine File Input Format]]
  
  
  == Features working on now ==