hadoop git commit: HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. Contributed by Genmao Yu

2017-11-01 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/apache-trunk [created] 0b9eb4c86


HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. 
Contributed by Genmao Yu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0b9eb4c8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0b9eb4c8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0b9eb4c8

Branch: refs/heads/apache-trunk
Commit: 0b9eb4c86ebc6db2d0527647ba52960ea8d5d9fe
Parents: 0cc98ae
Author: Sammi Chen 
Authored: Thu Nov 2 14:26:16 2017 +0800
Committer: Sammi Chen 
Committed: Thu Nov 2 14:26:16 2017 +0800

--
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml | 5 +
 1 file changed, 5 insertions(+)
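As the diff below shows, the cloud-storage aggregator POM now pulls in hadoop-aliyun alongside hadoop-aws. For context, here is a minimal, illustrative sketch of what that enables downstream: once hadoop-aliyun is on the classpath, an oss:// path can be read through the generic Hadoop FileSystem API. The bucket, endpoint, and credential values are placeholders, and the fs.oss.* key names are assumed from the hadoop-aliyun module rather than taken from this commit.

// Illustrative sketch only: listing an OSS bucket through the FileSystem API.
// Endpoint, bucket, credentials and the fs.oss.* keys below are assumptions.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OssListingSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.oss.endpoint", "oss-cn-hangzhou.aliyuncs.com");
    conf.set("fs.oss.accessKeyId", "<access-key-id>");
    conf.set("fs.oss.accessKeySecret", "<access-key-secret>");
    FileSystem fs = FileSystem.get(URI.create("oss://example-bucket/"), conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
  }
}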
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b9eb4c8/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
--
diff --git a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml 
b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
index 9711e52..73a9d41 100644
--- a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
+++ b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
@@ -105,6 +105,11 @@
 
 
       <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-aliyun</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-aws</artifactId>
       <scope>compile</scope>
     </dependency>





hadoop git commit: HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. Contributed by Genmao Yu

2017-11-01 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/apache-3.0 [created] 4b01ae9c6


HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. 
Contributed by Genmao Yu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b01ae9c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b01ae9c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b01ae9c

Branch: refs/heads/apache-3.0
Commit: 4b01ae9c676fe27ce340b0aa4e08e612b037ee43
Parents: be7cfe5
Author: Sammi Chen 
Authored: Thu Nov 2 14:33:22 2017 +0800
Committer: Sammi Chen 
Committed: Thu Nov 2 14:33:22 2017 +0800

--
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b01ae9c/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
--
diff --git a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml 
b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
index 4d69e94..46dadcf 100644
--- a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
+++ b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
@@ -105,6 +105,11 @@
 
 
       <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-aliyun</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-aws</artifactId>
       <scope>compile</scope>
     </dependency>





[hadoop] Git Push Summary

2017-11-02 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/apache-3.0 [deleted] 4b01ae9c6




[hadoop] Git Push Summary

2017-11-02 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/apache-trunk [deleted] 0b9eb4c86




hadoop git commit: HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. Contributed by Genmao Yu

2017-11-02 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 12f92e636 -> 6b31a94b0


HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. 
Contributed by Genmao Yu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6b31a94b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6b31a94b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6b31a94b

Branch: refs/heads/branch-3.0
Commit: 6b31a94b01db4e3161070c95ba499591de570045
Parents: 12f92e6
Author: Sammi Chen 
Authored: Thu Nov 2 14:33:22 2017 +0800
Committer: Sammi Chen 
Committed: Thu Nov 2 17:08:53 2017 +0800

--
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b31a94b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
--
diff --git a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml 
b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
index 4d69e94..46dadcf 100644
--- a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
+++ b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
@@ -105,6 +105,11 @@
 
 
       <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-aliyun</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-aws</artifactId>
       <scope>compile</scope>
     </dependency>





hadoop git commit: HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. Contributed by Genmao Yu

2017-11-02 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk 178751ed8 -> cde56b9ce


HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. 
Contributed by Genmao Yu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cde56b9c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cde56b9c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cde56b9c

Branch: refs/heads/trunk
Commit: cde56b9cefe1eb2943eef56a6aa7fdfa1b78e909
Parents: 178751e
Author: Sammi Chen 
Authored: Thu Nov 2 14:26:16 2017 +0800
Committer: Sammi Chen 
Committed: Thu Nov 2 17:12:04 2017 +0800

--
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cde56b9c/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
--
diff --git a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml 
b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
index 9711e52..73a9d41 100644
--- a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
+++ b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
@@ -105,6 +105,11 @@
 
 
       <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-aliyun</artifactId>
+      <scope>compile</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-aws</artifactId>
       <scope>compile</scope>
     </dependency>





hadoop git commit: HDFS-11600. Refactor TestDFSStripedOutputStreamWithFailure test classes. Contributed by Sammi Chen.

2018-03-13 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk fea16a440 -> ad1b988a8


HDFS-11600. Refactor TestDFSStripedOutputStreamWithFailure test classes. 
Contributed by Sammi Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ad1b988a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ad1b988a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ad1b988a

Branch: refs/heads/trunk
Commit: ad1b988a828608b12cafb6382436cd17f95bfcc5
Parents: fea16a4
Author: Sammi Chen 
Authored: Wed Mar 14 14:39:53 2018 +0800
Committer: Sammi Chen 
Committed: Wed Mar 14 14:39:53 2018 +0800

--
 ...edTestDFSStripedOutputStreamWithFailure.java |  71 +++
 ...tputStreamWithFailureWithRandomECPolicy.java |  49 ++
 .../TestDFSStripedOutputStreamWithFailure.java  | 496 +--
 ...estDFSStripedOutputStreamWithFailure000.java |  24 -
 ...estDFSStripedOutputStreamWithFailure010.java |  24 -
 ...estDFSStripedOutputStreamWithFailure020.java |  24 -
 ...estDFSStripedOutputStreamWithFailure030.java |  24 -
 ...estDFSStripedOutputStreamWithFailure040.java |  24 -
 ...estDFSStripedOutputStreamWithFailure050.java |  24 -
 ...estDFSStripedOutputStreamWithFailure060.java |  24 -
 ...estDFSStripedOutputStreamWithFailure070.java |  24 -
 ...estDFSStripedOutputStreamWithFailure080.java |  24 -
 ...estDFSStripedOutputStreamWithFailure090.java |  24 -
 ...estDFSStripedOutputStreamWithFailure100.java |  24 -
 ...estDFSStripedOutputStreamWithFailure110.java |  24 -
 ...estDFSStripedOutputStreamWithFailure120.java |  24 -
 ...estDFSStripedOutputStreamWithFailure130.java |  24 -
 ...estDFSStripedOutputStreamWithFailure140.java |  24 -
 ...estDFSStripedOutputStreamWithFailure150.java |  24 -
 ...estDFSStripedOutputStreamWithFailure160.java |  24 -
 ...estDFSStripedOutputStreamWithFailure170.java |  24 -
 ...estDFSStripedOutputStreamWithFailure180.java |  24 -
 ...estDFSStripedOutputStreamWithFailure190.java |  24 -
 ...estDFSStripedOutputStreamWithFailure200.java |  24 -
 ...estDFSStripedOutputStreamWithFailure210.java |  24 -
 ...stDFSStripedOutputStreamWithFailureBase.java | 426 
 26 files changed, 554 insertions(+), 1016 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ad1b988a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
new file mode 100644
index 000..284fdb7
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.junit.Assume.assumeTrue;
+
+/**
+ * Test striped file write operation with data node failures with parameterized
+ * test cases.
+ */
+@RunWith(Parameterized.class)
+public class ParameterizedTestDFSStripedOutputStreamWithFailure extends
+TestDFSStripedOutputStreamWithFailureBase{
+  public static final Logger LOG = LoggerFactory.getLogger(
+  ParameterizedTestDFSStripedOutputStreamWithFailure.class);
+
+  private int base;
+
+  @Parameterized.Parameters
+  public static Collection data() {
+List parameters = new ArrayList<>();
+for (int i = 0; i <= 10; i++) {
+  parameters.add(new Object[]{RANDOM.nextInt(220)});
+}
+return parameters;
+  }
+
+  public ParameterizedTestDFSStripedOutputStreamWithFailure(int base) {
+this.
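The new parameterized test is truncated here by the digest. For readers unfamiliar with the pattern this refactor adopts, the following is a minimal, self-contained sketch of a JUnit 4 Parameterized test; the class name and values are illustrative, not part of the Hadoop change. One test class runs once per entry returned by the @Parameters method, which is what lets the 22 numbered TestDFSStripedOutputStreamWithFailureNNN classes collapse into a single parameterized class.

// Minimal JUnit 4 Parameterized sketch (illustrative only).
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import static org.junit.Assert.assertTrue;

@RunWith(Parameterized.class)
public class ParameterizedSketch {
  private final int base;

  @Parameterized.Parameters
  public static Collection<Object[]> data() {
    // Each Object[] becomes one constructor invocation, i.e. one test run.
    return Arrays.asList(new Object[][]{{0}, {10}, {20}});
  }

  public ParameterizedSketch(int base) {
    this.base = base;
  }

  @Test
  public void testBaseIsNonNegative() {
    assertTrue(base >= 0);
  }
}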

hadoop git commit: HDFS-11600. Refactor TestDFSStripedOutputStreamWithFailure test classes. Contributed by Sammi Chen.

2018-03-14 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 a10506972 -> e97234f27


HDFS-11600. Refactor TestDFSStripedOutputStreamWithFailure test classes. 
Contributed by Sammi Chen.

(cherry picked from commit ad1b988a828608b12cafb6382436cd17f95bfcc5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e97234f2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e97234f2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e97234f2

Branch: refs/heads/branch-3.0
Commit: e97234f274e8be60507816404c1d2768b836d8f5
Parents: a105069
Author: Sammi Chen 
Authored: Wed Mar 14 14:39:53 2018 +0800
Committer: Sammi Chen 
Committed: Wed Mar 14 14:51:51 2018 +0800

--
 ...edTestDFSStripedOutputStreamWithFailure.java |  71 +++
 ...tputStreamWithFailureWithRandomECPolicy.java |  49 ++
 .../TestDFSStripedOutputStreamWithFailure.java  | 496 +--
 ...estDFSStripedOutputStreamWithFailure000.java |  24 -
 ...estDFSStripedOutputStreamWithFailure010.java |  24 -
 ...estDFSStripedOutputStreamWithFailure020.java |  24 -
 ...estDFSStripedOutputStreamWithFailure030.java |  24 -
 ...estDFSStripedOutputStreamWithFailure040.java |  24 -
 ...estDFSStripedOutputStreamWithFailure050.java |  24 -
 ...estDFSStripedOutputStreamWithFailure060.java |  24 -
 ...estDFSStripedOutputStreamWithFailure070.java |  24 -
 ...estDFSStripedOutputStreamWithFailure080.java |  24 -
 ...estDFSStripedOutputStreamWithFailure090.java |  24 -
 ...estDFSStripedOutputStreamWithFailure100.java |  24 -
 ...estDFSStripedOutputStreamWithFailure110.java |  24 -
 ...estDFSStripedOutputStreamWithFailure120.java |  24 -
 ...estDFSStripedOutputStreamWithFailure130.java |  24 -
 ...estDFSStripedOutputStreamWithFailure140.java |  24 -
 ...estDFSStripedOutputStreamWithFailure150.java |  24 -
 ...estDFSStripedOutputStreamWithFailure160.java |  24 -
 ...estDFSStripedOutputStreamWithFailure170.java |  24 -
 ...estDFSStripedOutputStreamWithFailure180.java |  24 -
 ...estDFSStripedOutputStreamWithFailure190.java |  24 -
 ...estDFSStripedOutputStreamWithFailure200.java |  24 -
 ...estDFSStripedOutputStreamWithFailure210.java |  24 -
 ...stDFSStripedOutputStreamWithFailureBase.java | 426 
 26 files changed, 554 insertions(+), 1016 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e97234f2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
new file mode 100644
index 000..284fdb7
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.junit.Assume.assumeTrue;
+
+/**
+ * Test striped file write operation with data node failures with parameterized
+ * test cases.
+ */
+@RunWith(Parameterized.class)
+public class ParameterizedTestDFSStripedOutputStreamWithFailure extends
+TestDFSStripedOutputStreamWithFailureBase{
+  public static final Logger LOG = LoggerFactory.getLogger(
+  ParameterizedTestDFSStripedOutputStreamWithFailure.class);
+
+  private int base;
+
+  @Parameterized.Parameters
+  public static Collection data() {
+List parameters = new ArrayList<>();
+for (int i = 0; i <= 10; i++) {
+  parameters.add(new Object[]{RANDOM.nextInt(220)});
+}
+return parameters;
+  }
+
+  

hadoop git commit: HDFS-11600. Refactor TestDFSStripedOutputStreamWithFailure test classes. Contributed by Sammi Chen.

2018-03-14 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 c662e688d -> ec951a3dd


HDFS-11600. Refactor TestDFSStripedOutputStreamWithFailure test classes. 
Contributed by Sammi Chen.

(cherry picked from commit ad1b988a828608b12cafb6382436cd17f95bfcc5)
(cherry picked from commit e97234f274e8be60507816404c1d2768b836d8f5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ec951a3d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ec951a3d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ec951a3d

Branch: refs/heads/branch-3.1
Commit: ec951a3dded58cc5803616c4d1aa70f8c7bac54f
Parents: c662e68
Author: Sammi Chen 
Authored: Wed Mar 14 14:39:53 2018 +0800
Committer: Sammi Chen 
Committed: Wed Mar 14 15:19:05 2018 +0800

--
 ...edTestDFSStripedOutputStreamWithFailure.java |  71 +++
 ...tputStreamWithFailureWithRandomECPolicy.java |  49 ++
 .../TestDFSStripedOutputStreamWithFailure.java  | 496 +--
 ...estDFSStripedOutputStreamWithFailure000.java |  24 -
 ...estDFSStripedOutputStreamWithFailure010.java |  24 -
 ...estDFSStripedOutputStreamWithFailure020.java |  24 -
 ...estDFSStripedOutputStreamWithFailure030.java |  24 -
 ...estDFSStripedOutputStreamWithFailure040.java |  24 -
 ...estDFSStripedOutputStreamWithFailure050.java |  24 -
 ...estDFSStripedOutputStreamWithFailure060.java |  24 -
 ...estDFSStripedOutputStreamWithFailure070.java |  24 -
 ...estDFSStripedOutputStreamWithFailure080.java |  24 -
 ...estDFSStripedOutputStreamWithFailure090.java |  24 -
 ...estDFSStripedOutputStreamWithFailure100.java |  24 -
 ...estDFSStripedOutputStreamWithFailure110.java |  24 -
 ...estDFSStripedOutputStreamWithFailure120.java |  24 -
 ...estDFSStripedOutputStreamWithFailure130.java |  24 -
 ...estDFSStripedOutputStreamWithFailure140.java |  24 -
 ...estDFSStripedOutputStreamWithFailure150.java |  24 -
 ...estDFSStripedOutputStreamWithFailure160.java |  24 -
 ...estDFSStripedOutputStreamWithFailure170.java |  24 -
 ...estDFSStripedOutputStreamWithFailure180.java |  24 -
 ...estDFSStripedOutputStreamWithFailure190.java |  24 -
 ...estDFSStripedOutputStreamWithFailure200.java |  24 -
 ...estDFSStripedOutputStreamWithFailure210.java |  24 -
 ...stDFSStripedOutputStreamWithFailureBase.java | 426 
 26 files changed, 554 insertions(+), 1016 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec951a3d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
new file mode 100644
index 000..284fdb7
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ParameterizedTestDFSStripedOutputStreamWithFailure.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import static org.junit.Assume.assumeTrue;
+
+/**
+ * Test striped file write operation with data node failures with parameterized
+ * test cases.
+ */
+@RunWith(Parameterized.class)
+public class ParameterizedTestDFSStripedOutputStreamWithFailure extends
+TestDFSStripedOutputStreamWithFailureBase{
+  public static final Logger LOG = LoggerFactory.getLogger(
+  ParameterizedTestDFSStripedOutputStreamWithFailure.class);
+
+  private int base;
+
+  @Parameterized.Parameters
+  public static Collection data() {
+List parameters = new ArrayList<>();
+for (int i = 0; i <= 10; i++) {
+  parameters.add(new Obje

hadoop git commit: HADOOP-15262. AliyunOSS: move files under a directory in parallel when rename a directory. Contributed by Jinhu Wu.

2018-03-19 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk 86816da5b -> d67a5e2de


HADOOP-15262. AliyunOSS: move files under a directory in parallel when rename a 
directory. Contributed by Jinhu Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d67a5e2d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d67a5e2d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d67a5e2d

Branch: refs/heads/trunk
Commit: d67a5e2dec5c60d96b0c216182891cdfd7832ac5
Parents: 86816da
Author: Sammi Chen 
Authored: Mon Mar 19 15:02:37 2018 +0800
Committer: Sammi Chen 
Committed: Mon Mar 19 15:02:37 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSCopyFileContext.java |  70 ++
 .../fs/aliyun/oss/AliyunOSSCopyFileTask.java|  65 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  51 +++-
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  16 +++
 .../oss/TestAliyunOSSFileSystemContract.java| 130 +++
 5 files changed, 330 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d67a5e2d/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
new file mode 100644
index 000..a843805
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
@@ -0,0 +1,70 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import java.util.concurrent.locks.Condition;
+import java.util.concurrent.locks.ReentrantLock;
+
+/**
+ * Used by {@link AliyunOSSFileSystem} and {@link AliyunOSSCopyFileTask}
+ * as copy context. It contains some variables used in copy process.
+ */
+public class AliyunOSSCopyFileContext {
+  private final ReentrantLock lock = new ReentrantLock();
+
+  private Condition readyCondition = lock.newCondition();
+
+  private boolean copyFailure;
+  private int copiesFinish;
+
+  public AliyunOSSCopyFileContext() {
+copyFailure = false;
+copiesFinish = 0;
+  }
+
+  public void lock() {
+lock.lock();
+  }
+
+  public void unlock() {
+lock.unlock();
+  }
+
+  public void awaitAllFinish(int copiesToFinish) throws InterruptedException {
+while (this.copiesFinish != copiesToFinish) {
+  readyCondition.await();
+}
+  }
+
+  public void signalAll() {
+readyCondition.signalAll();
+  }
+
+  public boolean isCopyFailure() {
+return copyFailure;
+  }
+
+  public void setCopyFailure(boolean copyFailure) {
+this.copyFailure = copyFailure;
+  }
+
+  public void incCopiesFinish() {
+++copiesFinish;
+  }
+}
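Before the (truncated) AliyunOSSCopyFileTask diff that follows, here is a rough sketch of how the context class above coordinates a parallel rename: each copy task reports in under the lock and signals, while the rename caller blocks in awaitAllFinish until every copy has finished. The executor setup and the copyOneFile() helper are hypothetical stand-ins, not the committed task code.

// Illustrative coordination sketch built only on the AliyunOSSCopyFileContext API above.
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CopyContextUsageSketch {
  // Placeholder for the real per-object copy performed by a copy task.
  static boolean copyOneFile(String key) {
    return true;
  }

  public static boolean copyAll(List<String> keys) throws InterruptedException {
    final AliyunOSSCopyFileContext ctx = new AliyunOSSCopyFileContext();
    ExecutorService pool = Executors.newFixedThreadPool(4);
    for (final String key : keys) {
      pool.execute(() -> {
        boolean ok = copyOneFile(key);
        ctx.lock();
        try {
          if (!ok) {
            ctx.setCopyFailure(true);
          }
          ctx.incCopiesFinish();
          ctx.signalAll();               // wake the thread blocked in awaitAllFinish()
        } finally {
          ctx.unlock();
        }
      });
    }
    ctx.lock();
    try {
      ctx.awaitAllFinish(keys.size());   // returns once every copy has reported in
    } finally {
      ctx.unlock();
    }
    pool.shutdown();
    return !ctx.isCopyFailure();
  }
}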

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d67a5e2d/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
new file mode 100644
index 000..42cd17b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
@@ -0,0 +1,65 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unles

hadoop git commit: HADOOP-15262. AliyunOSS: move files under a directory in parallel when rename a directory. Contributed by Jinhu Wu.

2018-03-19 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 204674f41 -> 2285afb32


HADOOP-15262. AliyunOSS: move files under a directory in parallel when rename a 
directory. Contributed by Jinhu Wu.

(cherry picked from commit d67a5e2dec5c60d96b0c216182891cdfd7832ac5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2285afb3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2285afb3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2285afb3

Branch: refs/heads/branch-2
Commit: 2285afb32e71622b3dab5051247a1d099cfcbe85
Parents: 204674f
Author: Sammi Chen 
Authored: Mon Mar 19 15:02:37 2018 +0800
Committer: Sammi Chen 
Committed: Mon Mar 19 15:09:23 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSCopyFileContext.java |  70 ++
 .../fs/aliyun/oss/AliyunOSSCopyFileTask.java|  65 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  51 +++-
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  16 +++
 .../oss/TestAliyunOSSFileSystemContract.java| 130 +++
 5 files changed, 330 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2285afb3/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
new file mode 100644
index 000..a843805
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
@@ -0,0 +1,70 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import java.util.concurrent.locks.Condition;
+import java.util.concurrent.locks.ReentrantLock;
+
+/**
+ * Used by {@link AliyunOSSFileSystem} and {@link AliyunOSSCopyFileTask}
+ * as copy context. It contains some variables used in copy process.
+ */
+public class AliyunOSSCopyFileContext {
+  private final ReentrantLock lock = new ReentrantLock();
+
+  private Condition readyCondition = lock.newCondition();
+
+  private boolean copyFailure;
+  private int copiesFinish;
+
+  public AliyunOSSCopyFileContext() {
+copyFailure = false;
+copiesFinish = 0;
+  }
+
+  public void lock() {
+lock.lock();
+  }
+
+  public void unlock() {
+lock.unlock();
+  }
+
+  public void awaitAllFinish(int copiesToFinish) throws InterruptedException {
+while (this.copiesFinish != copiesToFinish) {
+  readyCondition.await();
+}
+  }
+
+  public void signalAll() {
+readyCondition.signalAll();
+  }
+
+  public boolean isCopyFailure() {
+return copyFailure;
+  }
+
+  public void setCopyFailure(boolean copyFailure) {
+this.copyFailure = copyFailure;
+  }
+
+  public void incCopiesFinish() {
+++copiesFinish;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2285afb3/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
new file mode 100644
index 000..42cd17b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
@@ -0,0 +1,65 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the L

hadoop git commit: HADOOP-15262. AliyunOSS: move files under a directory in parallel when rename a directory. Contributed by Jinhu Wu.

2018-03-19 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 2b05559b2 -> 322520eb7


HADOOP-15262. AliyunOSS: move files under a directory in parallel when rename a 
directory. Contributed by Jinhu Wu.

(cherry picked from commit d67a5e2dec5c60d96b0c216182891cdfd7832ac5)
(cherry picked from commit 2285afb32e71622b3dab5051247a1d099cfcbe85)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/322520eb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/322520eb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/322520eb

Branch: refs/heads/branch-2.9
Commit: 322520eb76cdcef25190495ccf98b3ca39907f58
Parents: 2b05559
Author: Sammi Chen 
Authored: Mon Mar 19 15:02:37 2018 +0800
Committer: Sammi Chen 
Committed: Mon Mar 19 15:22:55 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSCopyFileContext.java |  70 ++
 .../fs/aliyun/oss/AliyunOSSCopyFileTask.java|  65 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  51 +++-
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  16 +++
 .../oss/TestAliyunOSSFileSystemContract.java| 130 +++
 5 files changed, 330 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/322520eb/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
new file mode 100644
index 000..a843805
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
@@ -0,0 +1,70 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import java.util.concurrent.locks.Condition;
+import java.util.concurrent.locks.ReentrantLock;
+
+/**
+ * Used by {@link AliyunOSSFileSystem} and {@link AliyunOSSCopyFileTask}
+ * as copy context. It contains some variables used in copy process.
+ */
+public class AliyunOSSCopyFileContext {
+  private final ReentrantLock lock = new ReentrantLock();
+
+  private Condition readyCondition = lock.newCondition();
+
+  private boolean copyFailure;
+  private int copiesFinish;
+
+  public AliyunOSSCopyFileContext() {
+copyFailure = false;
+copiesFinish = 0;
+  }
+
+  public void lock() {
+lock.lock();
+  }
+
+  public void unlock() {
+lock.unlock();
+  }
+
+  public void awaitAllFinish(int copiesToFinish) throws InterruptedException {
+while (this.copiesFinish != copiesToFinish) {
+  readyCondition.await();
+}
+  }
+
+  public void signalAll() {
+readyCondition.signalAll();
+  }
+
+  public boolean isCopyFailure() {
+return copyFailure;
+  }
+
+  public void setCopyFailure(boolean copyFailure) {
+this.copyFailure = copyFailure;
+  }
+
+  public void incCopiesFinish() {
+++copiesFinish;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/322520eb/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
new file mode 100644
index 000..42cd17b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
@@ -0,0 +1,65 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file e

hadoop git commit: HADOOP-15262. AliyunOSS: move files under a directory in parallel when rename a directory. Contributed by Jinhu Wu.

2018-03-19 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 c6b1125a2 -> 7985c5fdc


HADOOP-15262. AliyunOSS: move files under a directory in parallel when rename a 
directory. Contributed by Jinhu Wu.

(cherry picked from commit d67a5e2dec5c60d96b0c216182891cdfd7832ac5)
(cherry picked from commit 2285afb32e71622b3dab5051247a1d099cfcbe85)
(cherry picked from commit 322520eb76cdcef25190495ccf98b3ca39907f58)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7985c5fd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7985c5fd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7985c5fd

Branch: refs/heads/branch-3.0
Commit: 7985c5fdc9fe26e7771a86a5004bf90fcbd8af71
Parents: c6b1125
Author: Sammi Chen 
Authored: Mon Mar 19 15:02:37 2018 +0800
Committer: Sammi Chen 
Committed: Mon Mar 19 15:44:15 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSCopyFileContext.java |  70 ++
 .../fs/aliyun/oss/AliyunOSSCopyFileTask.java|  65 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  51 +++-
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  16 +++
 .../oss/TestAliyunOSSFileSystemContract.java| 130 +++
 5 files changed, 330 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7985c5fd/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
new file mode 100644
index 000..a843805
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
@@ -0,0 +1,70 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import java.util.concurrent.locks.Condition;
+import java.util.concurrent.locks.ReentrantLock;
+
+/**
+ * Used by {@link AliyunOSSFileSystem} and {@link AliyunOSSCopyFileTask}
+ * as copy context. It contains some variables used in copy process.
+ */
+public class AliyunOSSCopyFileContext {
+  private final ReentrantLock lock = new ReentrantLock();
+
+  private Condition readyCondition = lock.newCondition();
+
+  private boolean copyFailure;
+  private int copiesFinish;
+
+  public AliyunOSSCopyFileContext() {
+copyFailure = false;
+copiesFinish = 0;
+  }
+
+  public void lock() {
+lock.lock();
+  }
+
+  public void unlock() {
+lock.unlock();
+  }
+
+  public void awaitAllFinish(int copiesToFinish) throws InterruptedException {
+while (this.copiesFinish != copiesToFinish) {
+  readyCondition.await();
+}
+  }
+
+  public void signalAll() {
+readyCondition.signalAll();
+  }
+
+  public boolean isCopyFailure() {
+return copyFailure;
+  }
+
+  public void setCopyFailure(boolean copyFailure) {
+this.copyFailure = copyFailure;
+  }
+
+  public void incCopiesFinish() {
+++copiesFinish;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7985c5fd/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
new file mode 100644
index 000..42cd17b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
@@ -0,0 +1,65 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache 

hadoop git commit: HADOOP-15262. AliyunOSS: move files under a directory in parallel when rename a directory. Contributed by Jinhu Wu.

2018-03-19 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 7eb0bdbd3 -> 5acc13f34


HADOOP-15262. AliyunOSS: move files under a directory in parallel when rename a 
directory. Contributed by Jinhu Wu.

(cherry picked from commit d67a5e2dec5c60d96b0c216182891cdfd7832ac5)
(cherry picked from commit 2285afb32e71622b3dab5051247a1d099cfcbe85)
(cherry picked from commit 322520eb76cdcef25190495ccf98b3ca39907f58)
(cherry picked from commit 7985c5fdc9fe26e7771a86a5004bf90fcbd8af71)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5acc13f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5acc13f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5acc13f3

Branch: refs/heads/branch-3.1
Commit: 5acc13f3406af02b98406716a692a1899052924a
Parents: 7eb0bdb
Author: Sammi Chen 
Authored: Mon Mar 19 15:02:37 2018 +0800
Committer: Sammi Chen 
Committed: Tue Mar 20 14:03:17 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSCopyFileContext.java |  70 ++
 .../fs/aliyun/oss/AliyunOSSCopyFileTask.java|  65 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  51 +++-
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  16 +++
 .../oss/TestAliyunOSSFileSystemContract.java| 130 +++
 5 files changed, 330 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5acc13f3/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
new file mode 100644
index 000..a843805
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileContext.java
@@ -0,0 +1,70 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import java.util.concurrent.locks.Condition;
+import java.util.concurrent.locks.ReentrantLock;
+
+/**
+ * Used by {@link AliyunOSSFileSystem} and {@link AliyunOSSCopyFileTask}
+ * as copy context. It contains some variables used in copy process.
+ */
+public class AliyunOSSCopyFileContext {
+  private final ReentrantLock lock = new ReentrantLock();
+
+  private Condition readyCondition = lock.newCondition();
+
+  private boolean copyFailure;
+  private int copiesFinish;
+
+  public AliyunOSSCopyFileContext() {
+copyFailure = false;
+copiesFinish = 0;
+  }
+
+  public void lock() {
+lock.lock();
+  }
+
+  public void unlock() {
+lock.unlock();
+  }
+
+  public void awaitAllFinish(int copiesToFinish) throws InterruptedException {
+while (this.copiesFinish != copiesToFinish) {
+  readyCondition.await();
+}
+  }
+
+  public void signalAll() {
+readyCondition.signalAll();
+  }
+
+  public boolean isCopyFailure() {
+return copyFailure;
+  }
+
+  public void setCopyFailure(boolean copyFailure) {
+this.copyFailure = copyFailure;
+  }
+
+  public void incCopiesFinish() {
+++copiesFinish;
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5acc13f3/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
new file mode 100644
index 000..42cd17b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSCopyFileTask.java
@@ -0,0 +1,65 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyrigh

hadoop git commit: HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading mechanism. Contributed by Genmao Yu.

2018-03-30 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2216bde32 -> 6542d17ea


HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading 
mechanism. Contributed by Genmao Yu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6542d17e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6542d17e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6542d17e

Branch: refs/heads/trunk
Commit: 6542d17ea460ec222137c4b275b13daf15d3fca3
Parents: 2216bde
Author: Sammi Chen 
Authored: Fri Mar 30 20:23:05 2018 +0800
Committer: Sammi Chen 
Committed: Fri Mar 30 20:23:05 2018 +0800

--
 .../aliyun/oss/AliyunCredentialsProvider.java   |   3 +-
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 206 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  34 ++-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 173 
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 --
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 115 ---
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  22 +-
 .../oss/TestAliyunOSSBlockOutputStream.java | 115 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  10 +-
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 
 .../contract/TestAliyunOSSContractDistCp.java   |   2 +-
 11 files changed, 544 insertions(+), 338 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6542d17e/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
index b46c67a..58c14a9 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
@@ -35,8 +35,7 @@ import static org.apache.hadoop.fs.aliyun.oss.Constants.*;
 public class AliyunCredentialsProvider implements CredentialsProvider {
   private Credentials credentials = null;
 
-  public AliyunCredentialsProvider(Configuration conf)
-  throws IOException {
+  public AliyunCredentialsProvider(Configuration conf) throws IOException {
 String accessKeyId;
 String accessKeySecret;
 String securityToken;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6542d17e/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
new file mode 100644
index 000..12d551b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -0,0 +1,206 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import com.aliyun.oss.model.PartETag;
+import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.common.util.concurrent.ListeningExecutorService;
+import com.google.common.util.concurrent.MoreExecutors;
+import org.apache.hadoop.conf.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+
+/**
+ * Asynchronous multi-part based uploading mechanism to

hadoop git commit: HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading mechanism. Contributed by Genmao Yu.

2018-03-30 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 99b5b9dce -> e96c7bf82


HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading 
mechanism. Contributed by Genmao Yu.

(cherry picked from commit 6542d17ea460ec222137c4b275b13daf15d3fca3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e96c7bf8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e96c7bf8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e96c7bf8

Branch: refs/heads/branch-3.1
Commit: e96c7bf82de1e9fd97df5fb6b763e211ebad5913
Parents: 99b5b9d
Author: Sammi Chen 
Authored: Fri Mar 30 20:23:05 2018 +0800
Committer: Sammi Chen 
Committed: Fri Mar 30 20:26:35 2018 +0800

--
 .../aliyun/oss/AliyunCredentialsProvider.java   |   3 +-
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 206 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  34 ++-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 173 
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 --
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 115 ---
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  22 +-
 .../oss/TestAliyunOSSBlockOutputStream.java | 115 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  10 +-
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 
 .../contract/TestAliyunOSSContractDistCp.java   |   2 +-
 11 files changed, 544 insertions(+), 338 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e96c7bf8/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
index b46c67a..58c14a9 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
@@ -35,8 +35,7 @@ import static org.apache.hadoop.fs.aliyun.oss.Constants.*;
 public class AliyunCredentialsProvider implements CredentialsProvider {
   private Credentials credentials = null;
 
-  public AliyunCredentialsProvider(Configuration conf)
-  throws IOException {
+  public AliyunCredentialsProvider(Configuration conf) throws IOException {
 String accessKeyId;
 String accessKeySecret;
 String securityToken;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e96c7bf8/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
new file mode 100644
index 000..12d551b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -0,0 +1,206 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import com.aliyun.oss.model.PartETag;
+import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.common.util.concurrent.ListeningExecutorService;
+import com.google.common.util.concurrent.MoreExecutors;
+import org.apache.hadoop.conf.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.

hadoop git commit: Change version from 2.9.1-SNAPSHOT to 2.9.1, preparing for 2.9.1 release

2018-04-01 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9.1 [created] 33f8e80a9


Change version from 2.9.1-SNAPSHOT to 2.9.1, preparing for 2.9.1 release


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/33f8e80a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/33f8e80a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/33f8e80a

Branch: refs/heads/branch-2.9.1
Commit: 33f8e80a9e4b669628fe76fbf60563e4881620dd
Parents: 72bc5a8
Author: Sammi Chen 
Authored: Sun Apr 1 18:59:06 2018 +0800
Committer: Sammi Chen 
Committed: Sun Apr 1 18:59:06 2018 +0800

--
 hadoop-assemblies/pom.xml| 4 ++--
 hadoop-build-tools/pom.xml   | 2 +-
 hadoop-client/pom.xml| 4 ++--
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml| 4 ++--
 hadoop-cloud-storage-project/pom.xml | 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml | 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml   | 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml| 4 ++--
 hadoop-common-project/hadoop-common/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml | 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml | 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml | 4 ++--
 hadoop-common-project/pom.xml| 4 ++--
 hadoop-dist/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml| 4 ++--
 hadoop-hdfs-project/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client-common/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml | 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client-jobclient/pom.xml| 4 ++--
 .../hadoop-mapreduce-client-shuffle/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml   | 4 ++--
 hadoop-mapreduce-project/pom.xml | 4 ++--
 hadoop-maven-plugins/pom.xml | 2 +-
 hadoop-minicluster/pom.xml   | 4 ++--
 hadoop-project-dist/pom.xml  | 4 ++--
 hadoop-project/pom.xml   | 4 ++--
 hadoop-tools/hadoop-aliyun/pom.xml   | 2 +-
 hadoop-tools/hadoop-ant/pom.xml  | 4 ++--
 hadoop-tools/hadoop-archive-logs/pom.xml | 4 ++--
 hadoop-tools/hadoop-archives/pom.xml | 4 ++--
 hadoop-tools/hadoop-aws/pom.xml  | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml   | 2 +-
 hadoop-tools/hadoop-azure/pom.xml| 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml | 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml   | 4 ++--
 hadoop-tools/hadoop-extras/pom.xml   | 4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml  | 4 ++--
 hadoop-tools/hadoop-openstack/pom.xml| 4 ++--
 hadoop-tools/hadoop-pipes/pom.xml| 4 ++--
 hadoop-tools/hadoop-resourceestimator/pom.xml| 4 ++--
 hadoop-tools/hadoop-rumen/pom.xml| 4 ++--
 hadoop-tools/hadoop-sls/pom.xml  | 4 ++--
 hadoop-tools/hadoop-streaming/pom.xml| 4 ++--
 hadoop-tools/hadoop-tools-dist/pom.xml   | 4 ++--
 hadoop-tools/pom.xml | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml  | 4 ++--
 .../hadoop-yarn-applications-distributedshell/pom.xml| 4 ++--
 .../hadoop-yarn-

hadoop git commit: Preparing for 2.9.2 development

2018-04-01 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 72bc5a86a -> 7feb9b99a


Preparing for 2.9.2 development


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7feb9b99
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7feb9b99
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7feb9b99

Branch: refs/heads/branch-2.9
Commit: 7feb9b99a486df7c3ab9142b2bfe84cf780891b1
Parents: 72bc5a8
Author: Sammi Chen 
Authored: Sun Apr 1 19:36:24 2018 +0800
Committer: Sammi Chen 
Committed: Sun Apr 1 19:36:24 2018 +0800

--
 hadoop-assemblies/pom.xml| 4 ++--
 hadoop-build-tools/pom.xml   | 2 +-
 hadoop-client/pom.xml| 4 ++--
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml| 4 ++--
 hadoop-cloud-storage-project/pom.xml | 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml | 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml   | 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml| 4 ++--
 hadoop-common-project/hadoop-common/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml | 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml | 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml | 4 ++--
 hadoop-common-project/pom.xml| 4 ++--
 hadoop-dist/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml| 4 ++--
 hadoop-hdfs-project/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client-common/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml | 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client-jobclient/pom.xml| 4 ++--
 .../hadoop-mapreduce-client-shuffle/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml   | 4 ++--
 hadoop-mapreduce-project/pom.xml | 4 ++--
 hadoop-maven-plugins/pom.xml | 2 +-
 hadoop-minicluster/pom.xml   | 4 ++--
 hadoop-project-dist/pom.xml  | 4 ++--
 hadoop-project/pom.xml   | 4 ++--
 hadoop-tools/hadoop-aliyun/pom.xml   | 2 +-
 hadoop-tools/hadoop-ant/pom.xml  | 4 ++--
 hadoop-tools/hadoop-archive-logs/pom.xml | 4 ++--
 hadoop-tools/hadoop-archives/pom.xml | 4 ++--
 hadoop-tools/hadoop-aws/pom.xml  | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml   | 2 +-
 hadoop-tools/hadoop-azure/pom.xml| 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml | 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml   | 4 ++--
 hadoop-tools/hadoop-extras/pom.xml   | 4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml  | 4 ++--
 hadoop-tools/hadoop-openstack/pom.xml| 4 ++--
 hadoop-tools/hadoop-pipes/pom.xml| 4 ++--
 hadoop-tools/hadoop-resourceestimator/pom.xml| 4 ++--
 hadoop-tools/hadoop-rumen/pom.xml| 4 ++--
 hadoop-tools/hadoop-sls/pom.xml  | 4 ++--
 hadoop-tools/hadoop-streaming/pom.xml| 4 ++--
 hadoop-tools/hadoop-tools-dist/pom.xml   | 4 ++--
 hadoop-tools/pom.xml | 4 ++--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml  | 4 ++--
 .../hadoop-yarn-applications-distributedshell/pom.xml| 4 ++--
 .../hadoop-yarn-applications-unmanaged-am-launcher/pom.xml

hadoop git commit: HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream. Contributed by Jinhu Wu.

2018-07-29 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk 007e6f511 -> 0857f116b


HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in 
AliyunOSSBlockOutputStream. Contributed by Jinhu Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0857f116
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0857f116
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0857f116

Branch: refs/heads/trunk
Commit: 0857f116b754d83d3c540cd6f989087af24fef27
Parents: 007e6f5
Author: Sammi Chen 
Authored: Mon Jul 30 10:53:44 2018 +0800
Committer: Sammi Chen 
Committed: Mon Jul 30 10:53:44 2018 +0800

--
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 59 
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java |  2 +
 .../oss/TestAliyunOSSBlockOutputStream.java | 12 +++-
 3 files changed, 49 insertions(+), 24 deletions(-)
--
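
The core of this fix, shown in full in the diff below, is to key the staged block files by part number and to bump the id before each upload, so part numbers stay unique even when a block file gets reused. A minimal standalone sketch of that bookkeeping follows; the class and method names in the sketch are illustrative only and are not part of the patch or of any Hadoop API.

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch, not the Hadoop class: unique part numbers kept in a
// Map<Integer, File>, mirroring the blockFiles change in the patch below.
public class PartNumberSketch {
  private final Map<Integer, File> blockFiles = new HashMap<>();
  private int blockId = 0;

  // Register a staged local file and return the part number to upload it under.
  public synchronized int stagePart(File blockFile) {
    blockId++;                          // increment first, so numbers start at 1
    blockFiles.put(blockId, blockFile);
    return blockId;
  }

  // Delete the staged file of a part whose upload has completed.
  public synchronized void removeCompletedPart(int partNumber) throws IOException {
    File f = blockFiles.remove(partNumber);
    if (f != null && f.exists() && !f.delete()) {
      throw new IOException("Failed to delete temporary file " + f);
    }
  }
}

Keeping the number-to-file mapping is also what lets the patch delete exactly the files whose uploads have finished, as its removePartFiles() does.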


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0857f116/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
index 12d551b..0a833b2 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -33,7 +33,9 @@ import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.OutputStream;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 
@@ -50,7 +52,7 @@ public class AliyunOSSBlockOutputStream extends OutputStream {
   private boolean closed;
   private String key;
   private File blockFile;
-  private List<File> blockFiles = new ArrayList<>();
+  private Map<Integer, File> blockFiles = new HashMap<>();
   private long blockSize;
   private int blockId = 0;
   private long blockWritten = 0L;
@@ -94,8 +96,9 @@ public class AliyunOSSBlockOutputStream extends OutputStream {
 
 blockStream.flush();
 blockStream.close();
-if (!blockFiles.contains(blockFile)) {
-  blockFiles.add(blockFile);
+if (!blockFiles.values().contains(blockFile)) {
+  blockId++;
+  blockFiles.put(blockId, blockFile);
 }
 
 try {
@@ -107,7 +110,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
   ListenableFuture<PartETag> partETagFuture =
   executorService.submit(() -> {
 PartETag partETag = store.uploadPart(blockFile, key, uploadId,
-blockId + 1);
+blockId);
 return partETag;
   });
   partETagsFutures.add(partETagFuture);
@@ -120,11 +123,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 store.completeMultipartUpload(key, uploadId, partETags);
   }
 } finally {
-  for (File tFile: blockFiles) {
-if (tFile.exists() && !tFile.delete()) {
-  LOG.warn("Failed to delete temporary file {}", tFile);
-}
-  }
+  removePartFiles();
   closed = true;
 }
   }
@@ -141,38 +140,52 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 if (closed) {
   throw new IOException("Stream closed.");
 }
-try {
-  blockStream.write(b, off, len);
-  blockWritten += len;
-  if (blockWritten >= blockSize) {
-uploadCurrentPart();
-blockWritten = 0L;
+blockStream.write(b, off, len);
+blockWritten += len;
+if (blockWritten >= blockSize) {
+  uploadCurrentPart();
+  blockWritten = 0L;
+}
+  }
+
+  private void removePartFiles() throws IOException {
+for (ListenableFuture<PartETag> partETagFuture : partETagsFutures) {
+  if (!partETagFuture.isDone()) {
+continue;
   }
-} finally {
-  for (File tFile: blockFiles) {
-if (tFile.exists() && !tFile.delete()) {
-  LOG.warn("Failed to delete temporary file {}", tFile);
+
+  try {
+File blockFile = blockFiles.get(partETagFuture.get().getPartNumber());
+if (blockFile != null && blockFile.exists() && !blockFile.delete()) {
+  LOG.warn("Failed to delete temporary file {}", blockFile);
 }
+  } catch (InterruptedException | ExecutionException e) {
+throw new IOException(e);
   }
 }
   }
 
   private void uploadCurrentPart() throws IOException {
-blockFiles.add(blockFile);
 blockStream.flush();
 blockStream.close();
 i

hadoop git commit: HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream. Contributed by Jinhu Wu.

2018-07-29 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 eb40d4fab -> f9aedf32e


HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in 
AliyunOSSBlockOutputStream. Contributed by Jinhu Wu.

(cherry picked from commit 0857f116b754d83d3c540cd6f989087af24fef27)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f9aedf32
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f9aedf32
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f9aedf32

Branch: refs/heads/branch-3.0
Commit: f9aedf32ecdc6075c8760220cdfadecb9f1b6738
Parents: eb40d4f
Author: Sammi Chen 
Authored: Mon Jul 30 10:53:44 2018 +0800
Committer: Sammi Chen 
Committed: Mon Jul 30 10:56:31 2018 +0800

--
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 59 
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java |  2 +
 .../oss/TestAliyunOSSBlockOutputStream.java | 12 +++-
 3 files changed, 49 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f9aedf32/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
index 12d551b..0a833b2 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -33,7 +33,9 @@ import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.OutputStream;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 
@@ -50,7 +52,7 @@ public class AliyunOSSBlockOutputStream extends OutputStream {
   private boolean closed;
   private String key;
   private File blockFile;
-  private List<File> blockFiles = new ArrayList<>();
+  private Map<Integer, File> blockFiles = new HashMap<>();
   private long blockSize;
   private int blockId = 0;
   private long blockWritten = 0L;
@@ -94,8 +96,9 @@ public class AliyunOSSBlockOutputStream extends OutputStream {
 
 blockStream.flush();
 blockStream.close();
-if (!blockFiles.contains(blockFile)) {
-  blockFiles.add(blockFile);
+if (!blockFiles.values().contains(blockFile)) {
+  blockId++;
+  blockFiles.put(blockId, blockFile);
 }
 
 try {
@@ -107,7 +110,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
   ListenableFuture<PartETag> partETagFuture =
   executorService.submit(() -> {
 PartETag partETag = store.uploadPart(blockFile, key, uploadId,
-blockId + 1);
+blockId);
 return partETag;
   });
   partETagsFutures.add(partETagFuture);
@@ -120,11 +123,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 store.completeMultipartUpload(key, uploadId, partETags);
   }
 } finally {
-  for (File tFile: blockFiles) {
-if (tFile.exists() && !tFile.delete()) {
-  LOG.warn("Failed to delete temporary file {}", tFile);
-}
-  }
+  removePartFiles();
   closed = true;
 }
   }
@@ -141,38 +140,52 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 if (closed) {
   throw new IOException("Stream closed.");
 }
-try {
-  blockStream.write(b, off, len);
-  blockWritten += len;
-  if (blockWritten >= blockSize) {
-uploadCurrentPart();
-blockWritten = 0L;
+blockStream.write(b, off, len);
+blockWritten += len;
+if (blockWritten >= blockSize) {
+  uploadCurrentPart();
+  blockWritten = 0L;
+}
+  }
+
+  private void removePartFiles() throws IOException {
+for (ListenableFuture<PartETag> partETagFuture : partETagsFutures) {
+  if (!partETagFuture.isDone()) {
+continue;
   }
-} finally {
-  for (File tFile: blockFiles) {
-if (tFile.exists() && !tFile.delete()) {
-  LOG.warn("Failed to delete temporary file {}", tFile);
+
+  try {
+File blockFile = blockFiles.get(partETagFuture.get().getPartNumber());
+if (blockFile != null && blockFile.exists() && !blockFile.delete()) {
+  LOG.warn("Failed to delete temporary file {}", blockFile);
 }
+  } catch (InterruptedException | ExecutionException e) {
+throw new IOException(e);
   }
 }
   }
 
   private void uploadCurrentPart() throws IOException {
-block

hadoop git commit: HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream. Contributed by Jinhu Wu.

2018-07-29 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 2e7876a72 -> 42e34dae5


HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in 
AliyunOSSBlockOutputStream. Contributed by Jinhu Wu.

(cherry picked from commit 0857f116b754d83d3c540cd6f989087af24fef27)
(cherry picked from commit f9aedf32ecdc6075c8760220cdfadecb9f1b6738)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/42e34dae
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/42e34dae
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/42e34dae

Branch: refs/heads/branch-3.1
Commit: 42e34dae57cf70b9b1e0ebf588d5ca5c84ce99cb
Parents: 2e7876a
Author: Sammi Chen 
Authored: Mon Jul 30 10:53:44 2018 +0800
Committer: Sammi Chen 
Committed: Mon Jul 30 11:00:30 2018 +0800

--
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 59 
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java |  2 +
 .../oss/TestAliyunOSSBlockOutputStream.java | 12 +++-
 3 files changed, 49 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/42e34dae/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
index 12d551b..0a833b2 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -33,7 +33,9 @@ import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.OutputStream;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 
@@ -50,7 +52,7 @@ public class AliyunOSSBlockOutputStream extends OutputStream {
   private boolean closed;
   private String key;
   private File blockFile;
-  private List<File> blockFiles = new ArrayList<>();
+  private Map<Integer, File> blockFiles = new HashMap<>();
   private long blockSize;
   private int blockId = 0;
   private long blockWritten = 0L;
@@ -94,8 +96,9 @@ public class AliyunOSSBlockOutputStream extends OutputStream {
 
 blockStream.flush();
 blockStream.close();
-if (!blockFiles.contains(blockFile)) {
-  blockFiles.add(blockFile);
+if (!blockFiles.values().contains(blockFile)) {
+  blockId++;
+  blockFiles.put(blockId, blockFile);
 }
 
 try {
@@ -107,7 +110,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
   ListenableFuture<PartETag> partETagFuture =
   executorService.submit(() -> {
 PartETag partETag = store.uploadPart(blockFile, key, uploadId,
-blockId + 1);
+blockId);
 return partETag;
   });
   partETagsFutures.add(partETagFuture);
@@ -120,11 +123,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 store.completeMultipartUpload(key, uploadId, partETags);
   }
 } finally {
-  for (File tFile: blockFiles) {
-if (tFile.exists() && !tFile.delete()) {
-  LOG.warn("Failed to delete temporary file {}", tFile);
-}
-  }
+  removePartFiles();
   closed = true;
 }
   }
@@ -141,38 +140,52 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 if (closed) {
   throw new IOException("Stream closed.");
 }
-try {
-  blockStream.write(b, off, len);
-  blockWritten += len;
-  if (blockWritten >= blockSize) {
-uploadCurrentPart();
-blockWritten = 0L;
+blockStream.write(b, off, len);
+blockWritten += len;
+if (blockWritten >= blockSize) {
+  uploadCurrentPart();
+  blockWritten = 0L;
+}
+  }
+
+  private void removePartFiles() throws IOException {
+for (ListenableFuture<PartETag> partETagFuture : partETagsFutures) {
+  if (!partETagFuture.isDone()) {
+continue;
   }
-} finally {
-  for (File tFile: blockFiles) {
-if (tFile.exists() && !tFile.delete()) {
-  LOG.warn("Failed to delete temporary file {}", tFile);
+
+  try {
+File blockFile = blockFiles.get(partETagFuture.get().getPartNumber());
+if (blockFile != null && blockFile.exists() && !blockFile.delete()) {
+  LOG.warn("Failed to delete temporary file {}", blockFile);
 }
+  } catch (InterruptedException | ExecutionException e) {
+throw new IOException(e);
   }
 }
   }

hadoop git commit: HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream. Contributed by Jinhu Wu.

2018-08-01 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 21e416ad2 -> 418e957c6


HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in 
AliyunOSSBlockOutputStream. Contributed by Jinhu Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/418e957c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/418e957c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/418e957c

Branch: refs/heads/branch-2
Commit: 418e957c64cc31f13ea07c1b9d47208dcb4b4101
Parents: 21e416a
Author: Sammi Chen 
Authored: Thu Aug 2 10:13:22 2018 +0800
Committer: Sammi Chen 
Committed: Thu Aug 2 10:14:54 2018 +0800

--
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 59 
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java |  2 +
 .../oss/TestAliyunOSSBlockOutputStream.java | 12 +++-
 3 files changed, 49 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/418e957c/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
index 2d9a13b..42cb0b1 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -33,7 +33,9 @@ import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.OutputStream;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
@@ -51,7 +53,7 @@ public class AliyunOSSBlockOutputStream extends OutputStream {
   private boolean closed;
   private String key;
   private File blockFile;
-  private List<File> blockFiles = new ArrayList<>();
+  private Map<Integer, File> blockFiles = new HashMap<>();
   private long blockSize;
   private int blockId = 0;
   private long blockWritten = 0L;
@@ -95,8 +97,9 @@ public class AliyunOSSBlockOutputStream extends OutputStream {
 
 blockStream.flush();
 blockStream.close();
-if (!blockFiles.contains(blockFile)) {
-  blockFiles.add(blockFile);
+if (!blockFiles.values().contains(blockFile)) {
+  blockId++;
+  blockFiles.put(blockId, blockFile);
 }
 
 try {
@@ -110,7 +113,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 @Override
 public PartETag call() throws Exception {
   PartETag partETag = store.uploadPart(blockFile, key, 
uploadId,
-  blockId + 1);
+  blockId);
   return partETag;
 }
   });
@@ -124,11 +127,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 store.completeMultipartUpload(key, uploadId, partETags);
   }
 } finally {
-  for (File tFile: blockFiles) {
-if (tFile.exists() && !tFile.delete()) {
-  LOG.warn("Failed to delete temporary file {}", tFile);
-}
-  }
+  removePartFiles();
   closed = true;
 }
   }
@@ -145,41 +144,55 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 if (closed) {
   throw new IOException("Stream closed.");
 }
-try {
-  blockStream.write(b, off, len);
-  blockWritten += len;
-  if (blockWritten >= blockSize) {
-uploadCurrentPart();
-blockWritten = 0L;
+blockStream.write(b, off, len);
+blockWritten += len;
+if (blockWritten >= blockSize) {
+  uploadCurrentPart();
+  blockWritten = 0L;
+}
+  }
+
+  private void removePartFiles() throws IOException {
+for (ListenableFuture<PartETag> partETagFuture : partETagsFutures) {
+  if (!partETagFuture.isDone()) {
+continue;
   }
-} finally {
-  for (File tFile: blockFiles) {
-if (tFile.exists() && !tFile.delete()) {
-  LOG.warn("Failed to delete temporary file {}", tFile);
+
+  try {
+File blockFile = blockFiles.get(partETagFuture.get().getPartNumber());
+if (blockFile != null && blockFile.exists() && !blockFile.delete()) {
+  LOG.warn("Failed to delete temporary file {}", blockFile);
 }
+  } catch (InterruptedException | ExecutionException e) {
+throw new IOException(e);
   }
 }
   }
 
   private void uploadCurrentPart() throws IOException {
-blockFiles.add(blockFile);
 blockStream.flush();
 blockStream.

hadoop git commit: HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream. Contributed by Jinhu Wu.

2018-08-01 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 d9b9c9125 -> cde4d8697


HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in 
AliyunOSSBlockOutputStream. Contributed by Jinhu Wu.

(cherry picked from commit 418e957c64cc31f13ea07c1b9d47208dcb4b4101)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cde4d869
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cde4d869
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cde4d869

Branch: refs/heads/branch-2.9
Commit: cde4d8697833abbfe51761bb995d0592a0b3dfa2
Parents: d9b9c91
Author: Sammi Chen 
Authored: Thu Aug 2 10:13:22 2018 +0800
Committer: Sammi Chen 
Committed: Thu Aug 2 10:27:38 2018 +0800

--
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 59 
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java |  2 +
 .../oss/TestAliyunOSSBlockOutputStream.java | 12 +++-
 3 files changed, 49 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cde4d869/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
index 2d9a13b..42cb0b1 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -33,7 +33,9 @@ import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.OutputStream;
 import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
@@ -51,7 +53,7 @@ public class AliyunOSSBlockOutputStream extends OutputStream {
   private boolean closed;
   private String key;
   private File blockFile;
-  private List<File> blockFiles = new ArrayList<>();
+  private Map<Integer, File> blockFiles = new HashMap<>();
   private long blockSize;
   private int blockId = 0;
   private long blockWritten = 0L;
@@ -95,8 +97,9 @@ public class AliyunOSSBlockOutputStream extends OutputStream {
 
 blockStream.flush();
 blockStream.close();
-if (!blockFiles.contains(blockFile)) {
-  blockFiles.add(blockFile);
+if (!blockFiles.values().contains(blockFile)) {
+  blockId++;
+  blockFiles.put(blockId, blockFile);
 }
 
 try {
@@ -110,7 +113,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 @Override
 public PartETag call() throws Exception {
   PartETag partETag = store.uploadPart(blockFile, key, 
uploadId,
-  blockId + 1);
+  blockId);
   return partETag;
 }
   });
@@ -124,11 +127,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 store.completeMultipartUpload(key, uploadId, partETags);
   }
 } finally {
-  for (File tFile: blockFiles) {
-if (tFile.exists() && !tFile.delete()) {
-  LOG.warn("Failed to delete temporary file {}", tFile);
-}
-  }
+  removePartFiles();
   closed = true;
 }
   }
@@ -145,41 +144,55 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 if (closed) {
   throw new IOException("Stream closed.");
 }
-try {
-  blockStream.write(b, off, len);
-  blockWritten += len;
-  if (blockWritten >= blockSize) {
-uploadCurrentPart();
-blockWritten = 0L;
+blockStream.write(b, off, len);
+blockWritten += len;
+if (blockWritten >= blockSize) {
+  uploadCurrentPart();
+  blockWritten = 0L;
+}
+  }
+
+  private void removePartFiles() throws IOException {
+for (ListenableFuture<PartETag> partETagFuture : partETagsFutures) {
+  if (!partETagFuture.isDone()) {
+continue;
   }
-} finally {
-  for (File tFile: blockFiles) {
-if (tFile.exists() && !tFile.delete()) {
-  LOG.warn("Failed to delete temporary file {}", tFile);
+
+  try {
+File blockFile = blockFiles.get(partETagFuture.get().getPartNumber());
+if (blockFile != null && blockFile.exists() && !blockFile.delete()) {
+  LOG.warn("Failed to delete temporary file {}", blockFile);
 }
+  } catch (InterruptedException | ExecutionException e) {
+throw new IOException(e);
   }
 }
   }
 
   private void uploadCurrentPart() throws IOException {
-

hadoop git commit: HADOOP-15499. Performance severe drops when running RawErasureCoderBenchmark with NativeRSRawErasureCoder. Contributed by Sammi Chen.

2018-06-10 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk ccfb816d3 -> 18201b882


HADOOP-15499. Performance severe drops when running RawErasureCoderBenchmark 
with NativeRSRawErasureCoder. Contributed by Sammi Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/18201b88
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/18201b88
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/18201b88

Branch: refs/heads/trunk
Commit: 18201b882a38ad875358c5d23c09b0ef903c2f91
Parents: ccfb816
Author: Sammi Chen 
Authored: Mon Jun 11 13:53:37 2018 +0800
Committer: Sammi Chen 
Committed: Mon Jun 11 13:53:37 2018 +0800

--
 .../rawcoder/AbstractNativeRawDecoder.java  | 51 
 .../rawcoder/AbstractNativeRawEncoder.java  | 49 +++
 .../rawcoder/NativeRSRawDecoder.java| 19 ++--
 .../rawcoder/NativeRSRawEncoder.java| 19 ++--
 .../rawcoder/NativeXORRawDecoder.java   | 19 ++--
 .../rawcoder/NativeXORRawEncoder.java   | 19 ++--
 .../rawcoder/RawErasureCoderBenchmark.java  |  6 +++
 7 files changed, 127 insertions(+), 55 deletions(-)
--
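
The change below replaces the per-call synchronized on the native coders with a ReentrantReadWriteLock: doDecode and doEncode take the read lock, so concurrent encode and decode calls no longer serialize against one another, while the comment added to AbstractNativeRawDecoder notes that init and release must not overlap with them, which is what the write lock is for. The following standalone sketch shows only that locking pattern; the names are illustrative and are not the Hadoop coder API.

import java.io.IOException;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of the read/write-lock pattern used by the patch below.
public class GuardedNativeHandle {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private long nativeHandle = 1L;   // stand-in for the native coder pointer

  // Many threads may run use() at once under the shared read lock.
  public void use() throws IOException {
    lock.readLock().lock();
    try {
      if (nativeHandle == 0) {
        throw new IOException(getClass().getSimpleName() + " closed");
      }
      // ... perform work against nativeHandle ...
    } finally {
      lock.readLock().unlock();
    }
  }

  // Releasing takes the write lock, so it cannot race with in-flight use() calls.
  public void release() {
    lock.writeLock().lock();
    try {
      nativeHandle = 0;
    } finally {
      lock.writeLock().unlock();
    }
  }
}

Compared with synchronized, readers proceed in parallel, which is presumably where RawErasureCoderBenchmark recovers its throughput.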


http://git-wip-us.apache.org/repos/asf/hadoop/blob/18201b88/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
index e845747..cb71a80 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
@@ -25,6 +25,7 @@ import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 /**
  * Abstract native raw decoder for all native coders to extend with.
@@ -34,36 +35,46 @@ abstract class AbstractNativeRawDecoder extends 
RawErasureDecoder {
   public static Logger LOG =
   LoggerFactory.getLogger(AbstractNativeRawDecoder.class);
 
+  // Protect ISA-L coder data structure in native layer from being accessed and
+  // updated concurrently by the init, release and decode functions.
+  protected final ReentrantReadWriteLock decoderLock =
+  new ReentrantReadWriteLock();
+
   public AbstractNativeRawDecoder(ErasureCoderOptions coderOptions) {
 super(coderOptions);
   }
 
   @Override
-  protected synchronized void doDecode(ByteBufferDecodingState decodingState)
+  protected void doDecode(ByteBufferDecodingState decodingState)
   throws IOException {
-if (nativeCoder == 0) {
-  throw new IOException(String.format("%s closed",
-  getClass().getSimpleName()));
-}
-int[] inputOffsets = new int[decodingState.inputs.length];
-int[] outputOffsets = new int[decodingState.outputs.length];
+decoderLock.readLock().lock();
+try {
+  if (nativeCoder == 0) {
+throw new IOException(String.format("%s closed",
+getClass().getSimpleName()));
+  }
+  int[] inputOffsets = new int[decodingState.inputs.length];
+  int[] outputOffsets = new int[decodingState.outputs.length];
 
-ByteBuffer buffer;
-for (int i = 0; i < decodingState.inputs.length; ++i) {
-  buffer = decodingState.inputs[i];
-  if (buffer != null) {
-inputOffsets[i] = buffer.position();
+  ByteBuffer buffer;
+  for (int i = 0; i < decodingState.inputs.length; ++i) {
+buffer = decodingState.inputs[i];
+if (buffer != null) {
+  inputOffsets[i] = buffer.position();
+}
   }
-}
 
-for (int i = 0; i < decodingState.outputs.length; ++i) {
-  buffer = decodingState.outputs[i];
-  outputOffsets[i] = buffer.position();
-}
+  for (int i = 0; i < decodingState.outputs.length; ++i) {
+buffer = decodingState.outputs[i];
+outputOffsets[i] = buffer.position();
+  }
 
-performDecodeImpl(decodingState.inputs, inputOffsets,
-decodingState.decodeLength, decodingState.erasedIndexes,
-decodingState.outputs, outputOffsets);
+  performDecodeImpl(decodingState.inputs, inputOffsets,
+  decodingState.decodeLength, decodingState.erasedIndexes,
+  decodingState.outputs, outputOffsets);
+} finally {
+  decoderLock.readLock().unlock();
+}
   }
 
   protected abstract void performDecodeImpl(ByteBuffer[] inputs,

http://git-wip-us.apache.org/repos

hadoop git commit: HADOOP-15499. Performance severe drops when running RawErasureCoderBenchmark with NativeRSRawErasureCoder. Contributed by Sammi Chen.

2018-06-10 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 7c144e4ae -> b87411027


HADOOP-15499. Performance severe drops when running RawErasureCoderBenchmark 
with NativeRSRawErasureCoder. Contributed by Sammi Chen.

(cherry picked from commit 18201b882a38ad875358c5d23c09b0ef903c2f91)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b8741102
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b8741102
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b8741102

Branch: refs/heads/branch-3.0
Commit: b8741102758f70e79eb4043b71433560f5ca713e
Parents: 7c144e4
Author: Sammi Chen 
Authored: Mon Jun 11 13:53:37 2018 +0800
Committer: Sammi Chen 
Committed: Mon Jun 11 13:58:57 2018 +0800

--
 .../rawcoder/AbstractNativeRawDecoder.java  | 51 
 .../rawcoder/AbstractNativeRawEncoder.java  | 49 +++
 .../rawcoder/NativeRSRawDecoder.java| 19 ++--
 .../rawcoder/NativeRSRawEncoder.java| 19 ++--
 .../rawcoder/NativeXORRawDecoder.java   | 19 ++--
 .../rawcoder/NativeXORRawEncoder.java   | 19 ++--
 .../rawcoder/RawErasureCoderBenchmark.java  |  6 +++
 7 files changed, 127 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8741102/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
index e845747..cb71a80 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
@@ -25,6 +25,7 @@ import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 /**
  * Abstract native raw decoder for all native coders to extend with.
@@ -34,36 +35,46 @@ abstract class AbstractNativeRawDecoder extends 
RawErasureDecoder {
   public static Logger LOG =
   LoggerFactory.getLogger(AbstractNativeRawDecoder.class);
 
+  // Protect ISA-L coder data structure in native layer from being accessed and
+  // updated concurrently by the init, release and decode functions.
+  protected final ReentrantReadWriteLock decoderLock =
+  new ReentrantReadWriteLock();
+
   public AbstractNativeRawDecoder(ErasureCoderOptions coderOptions) {
 super(coderOptions);
   }
 
   @Override
-  protected synchronized void doDecode(ByteBufferDecodingState decodingState)
+  protected void doDecode(ByteBufferDecodingState decodingState)
   throws IOException {
-if (nativeCoder == 0) {
-  throw new IOException(String.format("%s closed",
-  getClass().getSimpleName()));
-}
-int[] inputOffsets = new int[decodingState.inputs.length];
-int[] outputOffsets = new int[decodingState.outputs.length];
+decoderLock.readLock().lock();
+try {
+  if (nativeCoder == 0) {
+throw new IOException(String.format("%s closed",
+getClass().getSimpleName()));
+  }
+  int[] inputOffsets = new int[decodingState.inputs.length];
+  int[] outputOffsets = new int[decodingState.outputs.length];
 
-ByteBuffer buffer;
-for (int i = 0; i < decodingState.inputs.length; ++i) {
-  buffer = decodingState.inputs[i];
-  if (buffer != null) {
-inputOffsets[i] = buffer.position();
+  ByteBuffer buffer;
+  for (int i = 0; i < decodingState.inputs.length; ++i) {
+buffer = decodingState.inputs[i];
+if (buffer != null) {
+  inputOffsets[i] = buffer.position();
+}
   }
-}
 
-for (int i = 0; i < decodingState.outputs.length; ++i) {
-  buffer = decodingState.outputs[i];
-  outputOffsets[i] = buffer.position();
-}
+  for (int i = 0; i < decodingState.outputs.length; ++i) {
+buffer = decodingState.outputs[i];
+outputOffsets[i] = buffer.position();
+  }
 
-performDecodeImpl(decodingState.inputs, inputOffsets,
-decodingState.decodeLength, decodingState.erasedIndexes,
-decodingState.outputs, outputOffsets);
+  performDecodeImpl(decodingState.inputs, inputOffsets,
+  decodingState.decodeLength, decodingState.erasedIndexes,
+  decodingState.outputs, outputOffsets);
+} finally {
+  decoderLock.readLock().unlock();
+}
   }
 
   protected abstract

hadoop git commit: HADOOP-15499. Performance severe drops when running RawErasureCoderBenchmark with NativeRSRawErasureCoder. Contributed by Sammi Chen.

2018-06-10 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 c0d46a84a -> e3c96354a


HADOOP-15499. Performance severe drops when running RawErasureCoderBenchmark 
with NativeRSRawErasureCoder. Contributed by Sammi Chen.

(cherry picked from commit 18201b882a38ad875358c5d23c09b0ef903c2f91)
(cherry picked from commit b8741102758f70e79eb4043b71433560f5ca713e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e3c96354
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e3c96354
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e3c96354

Branch: refs/heads/branch-3.1
Commit: e3c96354a749f50038c7604fcc3fb23ecf262add
Parents: c0d46a8
Author: Sammi Chen 
Authored: Mon Jun 11 13:53:37 2018 +0800
Committer: Sammi Chen 
Committed: Mon Jun 11 14:03:39 2018 +0800

--
 .../rawcoder/AbstractNativeRawDecoder.java  | 51 
 .../rawcoder/AbstractNativeRawEncoder.java  | 49 +++
 .../rawcoder/NativeRSRawDecoder.java| 19 ++--
 .../rawcoder/NativeRSRawEncoder.java| 19 ++--
 .../rawcoder/NativeXORRawDecoder.java   | 19 ++--
 .../rawcoder/NativeXORRawEncoder.java   | 19 ++--
 .../rawcoder/RawErasureCoderBenchmark.java  |  6 +++
 7 files changed, 127 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e3c96354/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
index e845747..cb71a80 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractNativeRawDecoder.java
@@ -25,6 +25,7 @@ import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 /**
  * Abstract native raw decoder for all native coders to extend with.
@@ -34,36 +35,46 @@ abstract class AbstractNativeRawDecoder extends 
RawErasureDecoder {
   public static Logger LOG =
   LoggerFactory.getLogger(AbstractNativeRawDecoder.class);
 
+  // Protect ISA-L coder data structure in native layer from being accessed and
+  // updated concurrently by the init, release and decode functions.
+  protected final ReentrantReadWriteLock decoderLock =
+  new ReentrantReadWriteLock();
+
   public AbstractNativeRawDecoder(ErasureCoderOptions coderOptions) {
 super(coderOptions);
   }
 
   @Override
-  protected synchronized void doDecode(ByteBufferDecodingState decodingState)
+  protected void doDecode(ByteBufferDecodingState decodingState)
   throws IOException {
-if (nativeCoder == 0) {
-  throw new IOException(String.format("%s closed",
-  getClass().getSimpleName()));
-}
-int[] inputOffsets = new int[decodingState.inputs.length];
-int[] outputOffsets = new int[decodingState.outputs.length];
+decoderLock.readLock().lock();
+try {
+  if (nativeCoder == 0) {
+throw new IOException(String.format("%s closed",
+getClass().getSimpleName()));
+  }
+  int[] inputOffsets = new int[decodingState.inputs.length];
+  int[] outputOffsets = new int[decodingState.outputs.length];
 
-ByteBuffer buffer;
-for (int i = 0; i < decodingState.inputs.length; ++i) {
-  buffer = decodingState.inputs[i];
-  if (buffer != null) {
-inputOffsets[i] = buffer.position();
+  ByteBuffer buffer;
+  for (int i = 0; i < decodingState.inputs.length; ++i) {
+buffer = decodingState.inputs[i];
+if (buffer != null) {
+  inputOffsets[i] = buffer.position();
+}
   }
-}
 
-for (int i = 0; i < decodingState.outputs.length; ++i) {
-  buffer = decodingState.outputs[i];
-  outputOffsets[i] = buffer.position();
-}
+  for (int i = 0; i < decodingState.outputs.length; ++i) {
+buffer = decodingState.outputs[i];
+outputOffsets[i] = buffer.position();
+  }
 
-performDecodeImpl(decodingState.inputs, inputOffsets,
-decodingState.decodeLength, decodingState.erasedIndexes,
-decodingState.outputs, outputOffsets);
+  performDecodeImpl(decodingState.inputs, inputOffsets,
+  decodingState.decodeLength, decodingState.erasedIndexes,
+  decodingState.outputs, outputOffsets);
+} finally {
+ 

[hadoop] Git Push Summary

2018-05-07 Thread sammichen
Repository: hadoop
Updated Tags:  refs/tags/rel/release-2.9.1 [created] 73be3652f




svn commit: r1831552 - /hadoop/common/site/main/author/src/documentation/content/xdocs/site.xml

2018-05-14 Thread sammichen
Author: sammichen
Date: Mon May 14 08:41:40 2018
New Revision: 1831552

URL: http://svn.apache.org/viewvc?rev=1831552&view=rev
Log:
No real content is changed. Commit to test whether Sammi has permission to modify the file.

Modified:
hadoop/common/site/main/author/src/documentation/content/xdocs/site.xml

Modified: 
hadoop/common/site/main/author/src/documentation/content/xdocs/site.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/author/src/documentation/content/xdocs/site.xml?rev=1831552&r1=1831551&r2=1831552&view=diff
==
--- hadoop/common/site/main/author/src/documentation/content/xdocs/site.xml 
(original)
+++ hadoop/common/site/main/author/src/documentation/content/xdocs/site.xml Mon 
May 14 08:41:40 2018
@@ -87,9 +87,9 @@
 
 http://hadoop.apache.org/docs/";>
   
-  
+ 
   
-  
+
   
   
   
@@ -100,5 +100,6 @@
 
 
   
+  
  
 






svn commit: r1831628 - in /hadoop/common/site/main: author/src/documentation/content/xdocs/ publish/ publish/docs/r2.9.1/ publish/docs/r2.9.1/api/ publish/docs/r2.9.1/api/org/ publish/docs/r2.9.1/api/

2018-05-15 Thread sammichen
Author: sammichen
Date: Tue May 15 11:42:03 2018
New Revision: 1831628

URL: http://svn.apache.org/viewvc?rev=1831628&view=rev
Log:
Update site for Hadoop 2.9.1 release


[This commit notification would consist of 3967 parts,
which exceeds the limit of 50, so it was shortened to the summary.]




svn commit: r1831634 - in /hadoop/common/site/main/publish/docs: current current2 stable2

2018-05-15 Thread sammichen
Author: sammichen
Date: Tue May 15 13:19:00 2018
New Revision: 1831634

URL: http://svn.apache.org/viewvc?rev=1831634&view=rev
Log:
Update current and stable link for Hadoop 2.9.1 release

Modified:
hadoop/common/site/main/publish/docs/current
hadoop/common/site/main/publish/docs/current2
hadoop/common/site/main/publish/docs/stable2

Modified: hadoop/common/site/main/publish/docs/current
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/docs/current?rev=1831634&r1=1831633&r2=1831634&view=diff
==
--- hadoop/common/site/main/publish/docs/current (original)
+++ hadoop/common/site/main/publish/docs/current Tue May 15 13:19:00 2018
@@ -1 +1 @@
-link current3
\ No newline at end of file
+link current2
\ No newline at end of file

Modified: hadoop/common/site/main/publish/docs/current2
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/docs/current2?rev=1831634&r1=1831633&r2=1831634&view=diff
==
--- hadoop/common/site/main/publish/docs/current2 (original)
+++ hadoop/common/site/main/publish/docs/current2 Tue May 15 13:19:00 2018
@@ -1 +1 @@
-link r2.9.0
\ No newline at end of file
+link r2.9.1
\ No newline at end of file

Modified: hadoop/common/site/main/publish/docs/stable2
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/docs/stable2?rev=1831634&r1=1831633&r2=1831634&view=diff
==
--- hadoop/common/site/main/publish/docs/stable2 (original)
+++ hadoop/common/site/main/publish/docs/stable2 Tue May 15 13:19:00 2018
@@ -1 +1 @@
-link r2.9.0
\ No newline at end of file
+link r2.9.1
\ No newline at end of file






svn commit: r1831639 - in /hadoop/common/site/main: author/src/documentation/content/xdocs/ publish/ publish/docs/r2.9.1/

2018-05-15 Thread sammichen
Author: sammichen
Date: Tue May 15 15:36:00 2018
New Revision: 1831639

URL: http://svn.apache.org/viewvc?rev=1831639&view=rev
Log:
Refine index and release page for Hadoop 2.9.1 release

Modified:
hadoop/common/site/main/author/src/documentation/content/xdocs/index.xml
hadoop/common/site/main/author/src/documentation/content/xdocs/releases.xml
hadoop/common/site/main/publish/docs/r2.9.1/index.html
hadoop/common/site/main/publish/index.html
hadoop/common/site/main/publish/index.pdf
hadoop/common/site/main/publish/releases.html
hadoop/common/site/main/publish/releases.pdf

Modified: 
hadoop/common/site/main/author/src/documentation/content/xdocs/index.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/author/src/documentation/content/xdocs/index.xml?rev=1831639&r1=1831638&r2=1831639&view=diff
==
--- hadoop/common/site/main/author/src/documentation/content/xdocs/index.xml 
(original)
+++ hadoop/common/site/main/author/src/documentation/content/xdocs/index.xml 
Tue May 15 15:36:00 2018
@@ -140,10 +140,10 @@
 
   
 3 May 2018: Release 2.9.1 available 
-This is the next release of Hadoop 2.9 line. It contains 208 bug 
fixes, improvements and enhancements since 2.9.0. 
+This is the next release of Apache Hadoop 2.9 line. It contains 208 
bug fixes, improvements and enhancements since 2.9.0. 
 
-   Users are encouraged to read the http://hadoop.apache.org/docs/r2.9.1/index.html";>overview of major 
changes since 2.9.0.
-   For details of 208 fixes, improvements, and other enhancements 
since the 2.9.0 release, please check:
+   Users are encouraged to read the http://hadoop.apache.org/docs/r2.9.1/index.html";>overview of major 
changes for major features and improvements for Apache Hadoop 2.9.
+   For details of 208 fixes, improvements, and other enhancements 
since the 2.9.0 release, please check
 http://hadoop.apache.org/docs/r2.9.1/hadoop-project-dist/hadoop-common/release/2.9.1/RELEASENOTES.2.9.1.html";>release
 notes and
 http://hadoop.apache.org/docs/r2.9.1/hadoop-project-dist/hadoop-common/release/2.9.1/CHANGES.2.9.1.html";>changelog.
 

Modified: 
hadoop/common/site/main/author/src/documentation/content/xdocs/releases.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/author/src/documentation/content/xdocs/releases.xml?rev=1831639&r1=1831638&r2=1831639&view=diff
==
--- hadoop/common/site/main/author/src/documentation/content/xdocs/releases.xml 
(original)
+++ hadoop/common/site/main/author/src/documentation/content/xdocs/releases.xml 
Tue May 15 15:36:00 2018
@@ -173,11 +173,11 @@
 
   
 3 May 2018: Release 2.9.1 available 
- Apache Hadoop 2.9.1 is the next release of Hadoop 2.9 line. It 
contains 208 bug fixes, improvements and enhancements since 2.9.0. For major 
features and improvements for Apache Hadoop 2.9.1, please refer:
+This is the next release of Apache Hadoop 2.9 line. It contains 208 
bug fixes, improvements and enhancements since 2.9.0. For major features and 
improvements for Apache Hadoop 2.9, please refer to
 http://hadoop.apache.org/docs/r2.9.1/index.html";>overview of major 
changes.
-For details of 208 fixes, improvements, and other enhancements 
since the 2.9.0 release, please check:
+For details of 208 fixes, improvements, and other enhancements 
since the 2.9.0 release, please check
 http://hadoop.apache.org/docs/r2.9.1/hadoop-project-dist/hadoop-common/release/2.9.1/RELEASENOTES.2.9.1.html";>release
 notes and
-http://hadoop.apache.org/docs/r2.9.1/hadoop-project-dist/hadoop-common/release/2.9.1/CHANGES.2.9.1.html";>changelog
+http://hadoop.apache.org/docs/r2.9.1/hadoop-project-dist/hadoop-common/release/2.9.1/CHANGES.2.9.1.html";>changelog.
 
   
 

Modified: hadoop/common/site/main/publish/docs/r2.9.1/index.html
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/docs/r2.9.1/index.html?rev=1831639&r1=1831638&r2=1831639&view=diff
==
--- hadoop/common/site/main/publish/docs/r2.9.1/index.html (original)
+++ hadoop/common/site/main/publish/docs/r2.9.1/index.html Tue May 15 15:36:00 
2018
@@ -428,7 +428,7 @@
   See the License for the specific language governing permissions and
   limitations under the License. See accompanying LICENSE file.
 -->Apache Hadoop 2.9.1
-Apache Hadoop 2.9.1 is a minor release in the 2.x.y release line, building 
upon the previous stable release 2.4.1.
+Apache Hadoop 2.9.1 is a point release in the 2.x.y release 

hadoop git commit: HDFS-13540. DFSStripedInputStream should only allocate new buffers when reading. Contributed by Xiao Chen.

2018-05-23 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk fed2bef64 -> 34e8b9f9a


HDFS-13540. DFSStripedInputStream should only allocate new buffers when 
reading. Contributed by Xiao Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/34e8b9f9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/34e8b9f9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/34e8b9f9

Branch: refs/heads/trunk
Commit: 34e8b9f9a86fb03156861482643fba11bdee1dd4
Parents: fed2bef
Author: Sammi Chen 
Authored: Wed May 23 19:10:09 2018 +0800
Committer: Sammi Chen 
Committed: Wed May 23 19:10:09 2018 +0800

--
 .../apache/hadoop/io/ElasticByteBufferPool.java | 12 ++
 .../hadoop/hdfs/DFSStripedInputStream.java  | 12 +++---
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 45 
 3 files changed, 64 insertions(+), 5 deletions(-)
--
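
The patch below gives resetCurStripeBuffer a flag so the stripe buffer is taken from the pool only on the read path; the close path merely clears whatever buffer already exists instead of allocating one it is about to discard. A small standalone sketch of that lazy-allocation pattern follows; the names are illustrative and a plain ByteBuffer.allocate stands in for the Hadoop buffer pool.

import java.nio.ByteBuffer;

// Illustrative sketch only: allocate the stripe buffer lazily, on the read path.
public class LazyStripeBufferSketch {
  private final int capacity;
  private ByteBuffer curStripeBuf;

  public LazyStripeBufferSketch(int capacity) {
    this.capacity = capacity;
  }

  private void resetCurStripeBuffer(boolean shouldAllocateBuf) {
    if (shouldAllocateBuf && curStripeBuf == null) {
      curStripeBuf = ByteBuffer.allocate(capacity);  // the patch borrows from BUFFER_POOL
    }
    if (curStripeBuf != null) {
      curStripeBuf.clear();
    }
  }

  public void readOneStripe() {
    resetCurStripeBuffer(true);    // reading: a buffer is genuinely needed
    // ... fill curStripeBuf from the block readers ...
  }

  public void closeCurrentBlockReaders() {
    resetCurStripeBuffer(false);   // closing: never allocate just to clear
  }
}

The size(boolean) method added to ElasticByteBufferPool appears to exist so the new test can observe that closing readers does not allocate extra buffers.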


http://git-wip-us.apache.org/repos/asf/hadoop/blob/34e8b9f9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
index 023f37f..9dd7771 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
@@ -116,4 +116,16 @@ public final class ElasticByteBufferPool implements 
ByteBufferPool {
   // poor granularity.
 }
   }
+
+  /**
+   * Get the size of the buffer pool, for the specified buffer type.
+   *
+   * @param direct Whether the size is returned for direct buffers
+   * @return The size
+   */
+  @InterfaceAudience.Private
+  @InterfaceStability.Unstable
+  public int size(boolean direct) {
+return getBufferTree(direct).size();
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/34e8b9f9/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index f3b16e0..5557a50 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -116,12 +116,14 @@ public class DFSStripedInputStream extends DFSInputStream 
{
 return decoder.preferDirectBuffer();
   }
 
-  void resetCurStripeBuffer() {
-if (curStripeBuf == null) {
+  private void resetCurStripeBuffer(boolean shouldAllocateBuf) {
+if (shouldAllocateBuf && curStripeBuf == null) {
   curStripeBuf = BUFFER_POOL.getBuffer(useDirectBuffer(),
   cellSize * dataBlkNum);
 }
-curStripeBuf.clear();
+if (curStripeBuf != null) {
+  curStripeBuf.clear();
+}
 curStripeRange = new StripeRange(0, 0);
   }
 
@@ -206,7 +208,7 @@ public class DFSStripedInputStream extends DFSInputStream {
*/
   @Override
   protected void closeCurrentBlockReaders() {
-resetCurStripeBuffer();
+resetCurStripeBuffer(false);
 if (blockReaders ==  null || blockReaders.length == 0) {
   return;
 }
@@ -296,7 +298,7 @@ public class DFSStripedInputStream extends DFSInputStream {
*/
   private void readOneStripe(CorruptedBlocks corruptedBlocks)
   throws IOException {
-resetCurStripeBuffer();
+resetCurStripeBuffer(true);
 
 // compute stripe range based on pos
 final long offsetInBlockGroup = getOffsetInBlockGroup();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/34e8b9f9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
index cdebee0..422746e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
@@ -30,6 +30,7 @@ import org.apache.hadoop.hdfs.server.datanode.DataNode;
 import org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils;
 import org.apache.hadoop.hdfs.serv

hadoop git commit: HDFS-13540. DFSStripedInputStream should only allocate new buffers when reading. Contributed by Xiao Chen.

2018-05-23 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 fc4c20fc3 -> 7d71b3a1c


HDFS-13540. DFSStripedInputStream should only allocate new buffers when 
reading. Contributed by Xiao Chen.

(cherry picked from commit 34e8b9f9a86fb03156861482643fba11bdee1dd4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7d71b3a1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7d71b3a1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7d71b3a1

Branch: refs/heads/branch-3.1
Commit: 7d71b3a1cc0f2572d8643bc2faeb878dfe028b8b
Parents: fc4c20f
Author: Sammi Chen 
Authored: Wed May 23 19:10:09 2018 +0800
Committer: Sammi Chen 
Committed: Wed May 23 19:12:04 2018 +0800

--
 .../apache/hadoop/io/ElasticByteBufferPool.java | 12 ++
 .../hadoop/hdfs/DFSStripedInputStream.java  | 12 +++---
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 45 
 3 files changed, 64 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d71b3a1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
index 023f37f..9dd7771 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
@@ -116,4 +116,16 @@ public final class ElasticByteBufferPool implements 
ByteBufferPool {
   // poor granularity.
 }
   }
+
+  /**
+   * Get the size of the buffer pool, for the specified buffer type.
+   *
+   * @param direct Whether the size is returned for direct buffers
+   * @return The size
+   */
+  @InterfaceAudience.Private
+  @InterfaceStability.Unstable
+  public int size(boolean direct) {
+return getBufferTree(direct).size();
+  }
 }
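
The size(boolean) accessor added above counts only the buffers currently cached in the pool, not buffers that callers still hold, which is presumably what the updated TestDFSStripedInputStream uses to observe allocations. A small, self-contained sketch of the behaviour (illustrative only; assumes hadoop-common on the classpath):

    import java.nio.ByteBuffer;
    import org.apache.hadoop.io.ElasticByteBufferPool;

    public class PoolSizeSketch {
      public static void main(String[] args) {
        ElasticByteBufferPool pool = new ElasticByteBufferPool();
        ByteBuffer buf = pool.getBuffer(true, 4096); // pool is empty, so a new direct buffer is allocated
        System.out.println(pool.size(true));         // 0: the buffer is in use, not cached
        pool.putBuffer(buf);                         // hand it back to the pool
        System.out.println(pool.size(true));         // 1: one direct buffer cached for reuse
      }
    }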

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d71b3a1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index c047b97..190ba8e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -114,12 +114,14 @@ public class DFSStripedInputStream extends DFSInputStream 
{
 return decoder.preferDirectBuffer();
   }
 
-  void resetCurStripeBuffer() {
-if (curStripeBuf == null) {
+  private void resetCurStripeBuffer(boolean shouldAllocateBuf) {
+if (shouldAllocateBuf && curStripeBuf == null) {
   curStripeBuf = BUFFER_POOL.getBuffer(useDirectBuffer(),
   cellSize * dataBlkNum);
 }
-curStripeBuf.clear();
+if (curStripeBuf != null) {
+  curStripeBuf.clear();
+}
 curStripeRange = new StripeRange(0, 0);
   }
 
@@ -204,7 +206,7 @@ public class DFSStripedInputStream extends DFSInputStream {
*/
   @Override
   protected void closeCurrentBlockReaders() {
-resetCurStripeBuffer();
+resetCurStripeBuffer(false);
 if (blockReaders ==  null || blockReaders.length == 0) {
   return;
 }
@@ -294,7 +296,7 @@ public class DFSStripedInputStream extends DFSInputStream {
*/
   private void readOneStripe(CorruptedBlocks corruptedBlocks)
   throws IOException {
-resetCurStripeBuffer();
+resetCurStripeBuffer(true);
 
 // compute stripe range based on pos
 final long offsetInBlockGroup = getOffsetInBlockGroup();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d71b3a1/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
index cdebee0..422746e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
@@ -30,6 +30,7 @@ import org.apache.hadoop.hdfs.server.datanode.DataNode;
 import org.apache.ha

hadoop git commit: HDFS-13540. DFSStripedInputStream should only allocate new buffers when reading. Contributed by Xiao Chen.

2018-05-23 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 bd98d4e77 -> 6db710b9d


HDFS-13540. DFSStripedInputStream should only allocate new buffers when 
reading. Contributed by Xiao Chen.

(cherry picked from commit 34e8b9f9a86fb03156861482643fba11bdee1dd4)
(cherry picked from commit 7d71b3a1cc0f2572d8643bc2faeb878dfe028b8b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6db710b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6db710b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6db710b9

Branch: refs/heads/branch-3.0
Commit: 6db710b9d800a98d801b241cef4db64fe0993447
Parents: bd98d4e
Author: Sammi Chen 
Authored: Wed May 23 19:10:09 2018 +0800
Committer: Sammi Chen 
Committed: Wed May 23 19:20:32 2018 +0800

--
 .../apache/hadoop/io/ElasticByteBufferPool.java | 12 ++
 .../hadoop/hdfs/DFSStripedInputStream.java  | 12 +++---
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 45 
 3 files changed, 64 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6db710b9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
index 023f37f..9dd7771 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
@@ -116,4 +116,16 @@ public final class ElasticByteBufferPool implements 
ByteBufferPool {
   // poor granularity.
 }
   }
+
+  /**
+   * Get the size of the buffer pool, for the specified buffer type.
+   *
+   * @param direct Whether the size is returned for direct buffers
+   * @return The size
+   */
+  @InterfaceAudience.Private
+  @InterfaceStability.Unstable
+  public int size(boolean direct) {
+return getBufferTree(direct).size();
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6db710b9/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
index c047b97..190ba8e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
@@ -114,12 +114,14 @@ public class DFSStripedInputStream extends DFSInputStream 
{
 return decoder.preferDirectBuffer();
   }
 
-  void resetCurStripeBuffer() {
-if (curStripeBuf == null) {
+  private void resetCurStripeBuffer(boolean shouldAllocateBuf) {
+if (shouldAllocateBuf && curStripeBuf == null) {
   curStripeBuf = BUFFER_POOL.getBuffer(useDirectBuffer(),
   cellSize * dataBlkNum);
 }
-curStripeBuf.clear();
+if (curStripeBuf != null) {
+  curStripeBuf.clear();
+}
 curStripeRange = new StripeRange(0, 0);
   }
 
@@ -204,7 +206,7 @@ public class DFSStripedInputStream extends DFSInputStream {
*/
   @Override
   protected void closeCurrentBlockReaders() {
-resetCurStripeBuffer();
+resetCurStripeBuffer(false);
 if (blockReaders ==  null || blockReaders.length == 0) {
   return;
 }
@@ -294,7 +296,7 @@ public class DFSStripedInputStream extends DFSInputStream {
*/
   private void readOneStripe(CorruptedBlocks corruptedBlocks)
   throws IOException {
-resetCurStripeBuffer();
+resetCurStripeBuffer(true);
 
 // compute stripe range based on pos
 final long offsetInBlockGroup = getOffsetInBlockGroup();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6db710b9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
index cdebee0..422746e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
@@ -30,6 +30,7 @@ import o

hadoop git commit: HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading mechanism. Contributed by Genmao Yu.

2018-04-09 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 f667ef1f6 -> 1d78825bf


HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading 
mechanism. Contributed by Genmao Yu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1d78825b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1d78825b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1d78825b

Branch: refs/heads/branch-2
Commit: 1d78825bf1eb9ea6ce2cdc240b2baae1701d8423
Parents: f667ef1
Author: Sammi Chen 
Authored: Tue Apr 10 14:24:31 2018 +0800
Committer: Sammi Chen 
Committed: Tue Apr 10 14:24:31 2018 +0800

--
 .../aliyun/oss/AliyunCredentialsProvider.java   |   3 +-
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 213 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  28 ++-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 167 ---
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 --
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 115 +++---
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  23 +-
 .../oss/TestAliyunOSSBlockOutputStream.java | 115 ++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  10 +-
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 
 .../contract/TestAliyunOSSContractDistCp.java   |   2 +-
 11 files changed, 546 insertions(+), 332 deletions(-)
--
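
For orientation before the new AliyunOSSBlockOutputStream below: writes are buffered into local block files, each full block is handed to a thread pool as one multipart-upload part, and close() waits for the outstanding parts before completing the multipart upload. A rough, hedged sketch of that shape follows; OssStore and its two methods are placeholders for illustration, not the actual AliyunOSSFileSystemStore signatures, while PartETag and the Guava executor types are taken from the imports shown in the patch.

    import com.aliyun.oss.model.PartETag;
    import com.google.common.util.concurrent.Futures;
    import com.google.common.util.concurrent.ListenableFuture;
    import com.google.common.util.concurrent.ListeningExecutorService;
    import com.google.common.util.concurrent.MoreExecutors;

    import java.io.File;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.Executors;

    final class MultipartUploadSketch {
      // Placeholder for the store layer; these method names are assumptions.
      interface OssStore {
        PartETag uploadPart(File block, String uploadId, int partNumber) throws IOException;
        void completeMultipartUpload(String uploadId, List<PartETag> etags) throws IOException;
      }

      private final ListeningExecutorService executor =
          MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(4));
      private final List<ListenableFuture<PartETag>> parts = new ArrayList<>();

      // Called whenever a locally buffered block file fills up.
      void blockFull(final OssStore store, final String uploadId,
          final File block, final int partNumber) {
        parts.add(executor.submit(new Callable<PartETag>() {
          @Override
          public PartETag call() throws Exception {
            return store.uploadPart(block, uploadId, partNumber); // one part per block
          }
        }));
      }

      // Called from close(): wait for every in-flight part, then finish the upload.
      void finish(OssStore store, String uploadId) throws Exception {
        List<PartETag> etags = Futures.allAsList(parts).get();
        store.completeMultipartUpload(uploadId, etags);
      }
    }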


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d78825b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
index b46c67a..58c14a9 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
@@ -35,8 +35,7 @@ import static org.apache.hadoop.fs.aliyun.oss.Constants.*;
 public class AliyunCredentialsProvider implements CredentialsProvider {
   private Credentials credentials = null;
 
-  public AliyunCredentialsProvider(Configuration conf)
-  throws IOException {
+  public AliyunCredentialsProvider(Configuration conf) throws IOException {
 String accessKeyId;
 String accessKeySecret;
 String securityToken;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d78825b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
new file mode 100644
index 000..2d9a13b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -0,0 +1,213 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import com.aliyun.oss.model.PartETag;
+import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.common.util.concurrent.ListeningExecutorService;
+import com.google.common.util.concurrent.MoreExecutors;
+import org.apache.hadoop.conf.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+
+/**
+ * Asynchrono

[1/2] hadoop git commit: HDFS-11915. Sync rbw dir on the first hsync() to avoid file lost on power failure. Contributed by Vinayakumar B.

2018-04-10 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9.1 91bb336d2 -> a7de3cfa7


HDFS-11915. Sync rbw dir on the first hsync() to avoid file lost on power 
failure. Contributed by Vinayakumar B.

(cherry picked from commit 2273499aef18ac2c7ffc435a61db8cea591e8b1f)
(cherry picked from commit f24d3b69b403f3a2c5af6b9c74a643fb9f4492e5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b42f02ca
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b42f02ca
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b42f02ca

Branch: refs/heads/branch-2.9.1
Commit: b42f02ca0c011f5998a12dbbc22e2674a22d
Parents: 91bb336
Author: Wei-Chiu Chuang 
Authored: Fri Jan 12 10:00:00 2018 -0800
Committer: Sammi Chen 
Committed: Tue Apr 10 11:41:48 2018 +0800

--
 .../hdfs/server/datanode/BlockReceiver.java   |  9 +
 .../hadoop/hdfs/server/datanode/DatanodeUtil.java | 18 ++
 .../datanode/fsdataset/impl/FsDatasetImpl.java| 15 ++-
 3 files changed, 29 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b42f02ca/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
index c8a33ca..7f381b1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
@@ -24,6 +24,7 @@ import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.EOFException;
+import java.io.File;
 import java.io.IOException;
 import java.io.OutputStreamWriter;
 import java.io.Writer;
@@ -127,6 +128,7 @@ class BlockReceiver implements Closeable {
 
   private boolean syncOnClose;
   private volatile boolean dirSyncOnFinalize;
+  private boolean dirSyncOnHSyncDone = false;
   private long restartBudget;
   /** the reference of the volume where the block receiver writes to */
   private ReplicaHandler replicaHandler;
@@ -421,6 +423,13 @@ class BlockReceiver implements Closeable {
   }
   flushTotalNanos += flushEndNanos - flushStartNanos;
 }
+if (isSync && !dirSyncOnHSyncDone && replicaInfo instanceof ReplicaInfo) {
+  ReplicaInfo rInfo = (ReplicaInfo) replicaInfo;
+  File baseDir = rInfo.getBlockFile().getParentFile();
+  FileIoProvider fileIoProvider = datanode.getFileIoProvider();
+  DatanodeUtil.fsyncDirectory(fileIoProvider, rInfo.getVolume(), baseDir);
+  dirSyncOnHSyncDone = true;
+}
 if (checksumOut != null || streams.getDataOut() != null) {
   datanode.metrics.addFlushNanos(flushTotalNanos);
   if (isSync) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b42f02ca/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
index c98ff54..e29a5ed 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
@@ -142,4 +142,22 @@ public class DatanodeUtil {
 }
 return (FileInputStream)lin.getWrappedStream();
   }
+
+  /**
+   * Call fsync on specified directories to sync metadata changes.
+   * @param fileIoProvider
+   * @param volume
+   * @param dirs
+   * @throws IOException
+   */
+  public static void fsyncDirectory(FileIoProvider fileIoProvider,
+  FsVolumeSpi volume, File... dirs) throws IOException {
+for (File dir : dirs) {
+  try {
+fileIoProvider.dirSync(volume, dir);
+  } catch (IOException e) {
+throw new IOException("Failed to sync " + dir, e);
+  }
+}
+  }
 }
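
The helper above routes the directory sync through FileIoProvider.dirSync so the operation is instrumented per volume, but the underlying primitive is simply an fsync on an open handle of the directory, which is what makes the new entry in the rbw directory durable across a power failure. A minimal sketch of that primitive in plain Java NIO (not the Hadoop code path):

    import java.io.File;
    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.StandardOpenOption;

    final class DirSyncSketch {
      static void fsyncDirectory(File dir) throws IOException {
        // Opening the directory read-only and forcing the channel flushes its
        // metadata (for example a newly created block file entry) to disk on
        // POSIX systems; some platforms may reject opening a directory this way.
        try (FileChannel channel =
                 FileChannel.open(dir.toPath(), StandardOpenOption.READ)) {
          channel.force(true);
        }
      }
    }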

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b42f02ca/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hd

[2/2] hadoop git commit: HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading mechanism. Contributed by Genmao Yu.

2018-04-10 Thread sammichen
HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading 
mechanism. Contributed by Genmao Yu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a7de3cfa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a7de3cfa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a7de3cfa

Branch: refs/heads/branch-2.9.1
Commit: a7de3cfa712087b3a8476f9ad83c3b1118fa5394
Parents: b42f02c
Author: Sammi Chen 
Authored: Tue Apr 10 16:45:53 2018 +0800
Committer: Sammi Chen 
Committed: Tue Apr 10 16:45:53 2018 +0800

--
 .../aliyun/oss/AliyunCredentialsProvider.java   |   3 +-
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 213 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  28 ++-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 167 ---
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 --
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 117 +++---
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  23 +-
 .../oss/TestAliyunOSSBlockOutputStream.java | 115 ++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  10 +-
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 
 .../contract/TestAliyunOSSContractDistCp.java   |   2 +-
 11 files changed, 547 insertions(+), 333 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a7de3cfa/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
index b46c67a..58c14a9 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
@@ -35,8 +35,7 @@ import static org.apache.hadoop.fs.aliyun.oss.Constants.*;
 public class AliyunCredentialsProvider implements CredentialsProvider {
   private Credentials credentials = null;
 
-  public AliyunCredentialsProvider(Configuration conf)
-  throws IOException {
+  public AliyunCredentialsProvider(Configuration conf) throws IOException {
 String accessKeyId;
 String accessKeySecret;
 String securityToken;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a7de3cfa/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
new file mode 100644
index 000..2d9a13b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -0,0 +1,213 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import com.aliyun.oss.model.PartETag;
+import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.common.util.concurrent.ListeningExecutorService;
+import com.google.common.util.concurrent.MoreExecutors;
+import org.apache.hadoop.conf.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+
+/**
+ * Asynchronous multi-part based uploading mechanism to support huge file
+ * which is larger

hadoop git commit: HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading mechanism. Contributed by Genmao Yu.

2018-04-12 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 41e0999b3 -> 79962d946


HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading 
mechanism. Contributed by Genmao Yu.

(cherry picked from commit a7de3cfa712087b3a8476f9ad83c3b1118fa5394)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/79962d94
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/79962d94
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/79962d94

Branch: refs/heads/branch-2.9
Commit: 79962d946eb4090b1df543086d8a379ac270aa22
Parents: 41e0999
Author: Sammi Chen 
Authored: Tue Apr 10 16:45:53 2018 +0800
Committer: Sammi Chen 
Committed: Thu Apr 12 19:03:43 2018 +0800

--
 .../aliyun/oss/AliyunCredentialsProvider.java   |   3 +-
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 213 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  28 ++-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 167 ---
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 --
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 117 +++---
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  23 +-
 .../oss/TestAliyunOSSBlockOutputStream.java | 115 ++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  10 +-
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 
 .../contract/TestAliyunOSSContractDistCp.java   |   2 +-
 11 files changed, 547 insertions(+), 333 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/79962d94/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
index b46c67a..58c14a9 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
@@ -35,8 +35,7 @@ import static org.apache.hadoop.fs.aliyun.oss.Constants.*;
 public class AliyunCredentialsProvider implements CredentialsProvider {
   private Credentials credentials = null;
 
-  public AliyunCredentialsProvider(Configuration conf)
-  throws IOException {
+  public AliyunCredentialsProvider(Configuration conf) throws IOException {
 String accessKeyId;
 String accessKeySecret;
 String securityToken;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/79962d94/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
new file mode 100644
index 000..2d9a13b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -0,0 +1,213 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import com.aliyun.oss.model.PartETag;
+import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.common.util.concurrent.ListeningExecutorService;
+import com.google.common.util.concurrent.MoreExecutors;
+import org.apache.hadoop.conf.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionExce

hadoop git commit: HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading mechanism. Contributed by Genmao Yu.

2018-04-12 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 d416a0c9b -> ce7ebbe2c


HADOOP-14999. AliyunOSS: provide one asynchronous multi-part based uploading 
mechanism. Contributed by Genmao Yu.

(cherry picked from commit 6542d17ea460ec222137c4b275b13daf15d3fca3)
(cherry picked from commit e96c7bf82de1e9fd97df5fb6b763e211ebad5913)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ce7ebbe2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ce7ebbe2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ce7ebbe2

Branch: refs/heads/branch-3.0
Commit: ce7ebbe2ccfb0d054e752f811ba6f1ba4ac360a4
Parents: d416a0c
Author: Sammi Chen 
Authored: Fri Mar 30 20:23:05 2018 +0800
Committer: Sammi Chen 
Committed: Fri Apr 13 10:17:46 2018 +0800

--
 .../aliyun/oss/AliyunCredentialsProvider.java   |   3 +-
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  | 206 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  34 ++-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 173 
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 --
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 115 ---
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  22 +-
 .../oss/TestAliyunOSSBlockOutputStream.java | 115 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  10 +-
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 
 .../contract/TestAliyunOSSContractDistCp.java   |   2 +-
 11 files changed, 544 insertions(+), 338 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce7ebbe2/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
index b46c67a..58c14a9 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
@@ -35,8 +35,7 @@ import static org.apache.hadoop.fs.aliyun.oss.Constants.*;
 public class AliyunCredentialsProvider implements CredentialsProvider {
   private Credentials credentials = null;
 
-  public AliyunCredentialsProvider(Configuration conf)
-  throws IOException {
+  public AliyunCredentialsProvider(Configuration conf) throws IOException {
 String accessKeyId;
 String accessKeySecret;
 String securityToken;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce7ebbe2/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
new file mode 100644
index 000..12d551b
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -0,0 +1,206 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import com.aliyun.oss.model.PartETag;
+import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.common.util.concurrent.ListeningExecutorService;
+import com.google.common.util.concurrent.MoreExecutors;
+import org.apache.hadoop.conf.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import j

svn commit: r1829251 - /hadoop/common/site/main/publish/who.html

2018-04-16 Thread sammichen
Author: sammichen
Date: Mon Apr 16 11:01:39 2018
New Revision: 1829251

URL: http://svn.apache.org/viewvc?rev=1829251&view=rev
Log: (empty)

Modified:
hadoop/common/site/main/publish/who.html

Modified: hadoop/common/site/main/publish/who.html
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/who.html?rev=1829251&r1=1829250&r2=1829251&view=diff
==
--- hadoop/common/site/main/publish/who.html (original)
+++ hadoop/common/site/main/publish/who.html Mon Apr 16 11:01:39 2018
@@ -815,7 +815,6 @@ document.write("Last Published: " + docu
  +5.5

 
-

 
  
@@ -2118,6 +2117,15 @@ document.write("Last Published: " + docu

 
 
+
+
+sammichen
+ http://people.apache.org/~sammichen";>Sammi Chen
+ Intel
+ 
+ +8
+
+

 
  



-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



svn propchange: r1829251 - svn:log

2018-04-16 Thread sammichen
Author: sammichen
Revision: 1829251
Modified property: svn:log

Modified: svn:log at Mon Apr 16 11:28:42 2018
--
--- svn:log (original)
+++ svn:log Mon Apr 16 11:28:42 2018
@@ -0,0 +1 @@
+Add "sammichen" in the committer list


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] Git Push Summary

2018-04-16 Thread sammichen
Repository: hadoop
Updated Tags:  refs/tags/release-2.9.1-RC0 [created] 05735beda

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



svn commit: r1829512 - in /hadoop/common/site/main: author/src/documentation/content/xdocs/who.xml publish/who.html publish/who.pdf

2018-04-19 Thread sammichen
Author: sammichen
Date: Thu Apr 19 07:39:08 2018
New Revision: 1829512

URL: http://svn.apache.org/viewvc?rev=1829512&view=rev
Log:
My last commit was incomplete. Add "sammichen" to the committer list.

Modified:
hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml
hadoop/common/site/main/publish/who.html
hadoop/common/site/main/publish/who.pdf

Modified: hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml?rev=1829512&r1=1829511&r2=1829512&view=diff
==
--- hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml 
(original)
+++ hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml Thu 
Apr 19 07:39:08 2018
@@ -1363,6 +1363,14 @@
  -8
    
 
+  
+ sammichen
+ Sammi Chen
+ Intel
+ 
+ +8
+   
+  

  schen
  http://people.apache.org/~schen";>Scott Chun-Yang 
Chen

Modified: hadoop/common/site/main/publish/who.html
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/who.html?rev=1829512&r1=1829511&r2=1829512&view=diff
==
--- hadoop/common/site/main/publish/who.html (original)
+++ hadoop/common/site/main/publish/who.html Thu Apr 19 07:39:08 2018
@@ -2118,6 +2118,15 @@ document.write("Last Published: " + docu

 
 
+
+ 
+sammichen
+ Sammi Chen
+ Intel
+ 
+ +8
+   
+

 
  

Modified: hadoop/common/site/main/publish/who.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/who.pdf?rev=1829512&r1=1829511&r2=1829512&view=diff
==
Binary files - no diff available.



-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

2018-01-16 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk 41049ba5d -> 9195a6e30


HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential 
read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9195a6e3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9195a6e3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9195a6e3

Branch: refs/heads/trunk
Commit: 9195a6e302028ed3921d1016ac2fa5754f06ebf0
Parents: 41049ba
Author: Sammi Chen 
Authored: Wed Jan 17 15:55:59 2018 +0800
Committer: Sammi Chen 
Committed: Wed Jan 17 15:55:59 2018 +0800

--
 .../dev-support/findbugs-exclude.xml|   8 +
 .../fs/aliyun/oss/AliyunOSSFileReaderTask.java  | 109 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  31 +++-
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 149 +--
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  13 +-
 .../apache/hadoop/fs/aliyun/oss/ReadBuffer.java |  86 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  49 ++
 8 files changed, 407 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9195a6e3/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
index 40d78d0..c55f8e3 100644
--- a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
@@ -15,4 +15,12 @@
limitations under the License.
 -->
 
+
+
+
+
+
+
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9195a6e3/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
new file mode 100644
index 000..e5bfc2c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Used by {@link AliyunOSSInputStream} as a task that is submitted
+ * to the thread pool.
+ * Each AliyunOSSFileReaderTask reads one part of the file so that
+ * we can accelerate the sequential read.
+ */
+public class AliyunOSSFileReaderTask implements Runnable {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(AliyunOSSFileReaderTask.class);
+
+  private String key;
+  private AliyunOSSFileSystemStore store;
+  private ReadBuffer readBuffer;
+  private static final int MAX_RETRIES = 3;
+  private RetryPolicy retryPolicy;
+
+  public AliyunOSSFileReaderTask(String key, AliyunOSSFileSystemStore store,
+  ReadBuffer readBuffer) {
+this.key = key;
+this.store = store;
+this.readBuffer = readBuffer;
+RetryPolicy defaultPolicy =
+RetryPolicies.retryUpToMaximumCountWithFixedSleep(
+MAX_RETRIES, 3, TimeUnit.SECONDS);
+Map<Class<? extends Exception>, RetryPolicy> policies = new HashMap<>();
+policies.put(IOException.class, defaultPolicy);
+policies.put(IndexOutOfBoundsException.class,
+RetryPolicies.TRY_ONCE_THEN_FAIL);
+policies.put(NullPointerException.cl
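
The reader task above wraps its part read in a Hadoop RetryPolicy so transient OSS failures are retried while obvious programming errors fail fast. The diff is cut off before the policies map is consumed; the usual way such a map is finished is RetryPolicies.retryByException, so the combining call in this self-contained sketch is an assumption rather than a quote from the patch:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.io.retry.RetryPolicies;
    import org.apache.hadoop.io.retry.RetryPolicy;

    final class ReaderRetrySketch {
      static RetryPolicy buildPolicy() {
        // Retry transient failures a few times with a fixed sleep.
        RetryPolicy defaultPolicy =
            RetryPolicies.retryUpToMaximumCountWithFixedSleep(3, 3, TimeUnit.SECONDS);
        // Never retry what looks like a programming error.
        Map<Class<? extends Exception>, RetryPolicy> policies = new HashMap<>();
        policies.put(IOException.class, defaultPolicy);
        policies.put(IndexOutOfBoundsException.class, RetryPolicies.TRY_ONCE_THEN_FAIL);
        policies.put(NullPointerException.class, RetryPolicies.TRY_ONCE_THEN_FAIL);
        // Exceptions not listed fall back to defaultPolicy (assumed combiner).
        return RetryPolicies.retryByException(defaultPolicy, policies);
      }
    }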

hadoop git commit: HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

2018-01-17 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 e54c65a32 -> 55142849d


HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential 
read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

(cherry picked from commit 9195a6e302028ed3921d1016ac2fa5754f06ebf0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/55142849
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/55142849
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/55142849

Branch: refs/heads/branch-3.0
Commit: 55142849db02a9191db0dd6f4e1401ff19ec242a
Parents: e54c65a
Author: Sammi Chen 
Authored: Wed Jan 17 15:55:59 2018 +0800
Committer: Sammi Chen 
Committed: Wed Jan 17 16:12:23 2018 +0800

--
 .../dev-support/findbugs-exclude.xml|   8 +
 .../fs/aliyun/oss/AliyunOSSFileReaderTask.java  | 109 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  31 +++-
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 149 +--
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  13 +-
 .../apache/hadoop/fs/aliyun/oss/ReadBuffer.java |  86 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  49 ++
 8 files changed, 407 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/55142849/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
index 40d78d0..c55f8e3 100644
--- a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
@@ -15,4 +15,12 @@
limitations under the License.
 -->
 
+
+
+
+
+
+
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/55142849/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
new file mode 100644
index 000..e5bfc2c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Used by {@link AliyunOSSInputStream} as a task that is submitted
+ * to the thread pool.
+ * Each AliyunOSSFileReaderTask reads one part of the file so that
+ * we can accelerate the sequential read.
+ */
+public class AliyunOSSFileReaderTask implements Runnable {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(AliyunOSSFileReaderTask.class);
+
+  private String key;
+  private AliyunOSSFileSystemStore store;
+  private ReadBuffer readBuffer;
+  private static final int MAX_RETRIES = 3;
+  private RetryPolicy retryPolicy;
+
+  public AliyunOSSFileReaderTask(String key, AliyunOSSFileSystemStore store,
+  ReadBuffer readBuffer) {
+this.key = key;
+this.store = store;
+this.readBuffer = readBuffer;
+RetryPolicy defaultPolicy =
+RetryPolicies.retryUpToMaximumCountWithFixedSleep(
+MAX_RETRIES, 3, TimeUnit.SECONDS);
+Map<Class<? extends Exception>, RetryPolicy> policies = new HashMap<>();
+policies.put(IOException.class, defaultPolicy);
+policies.put(IndexOutOfBoundsException.class,
+

hadoop git commit: HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

2018-01-17 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3 db8345fa9 -> 082a707ba


HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential 
read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

(cherry picked from commit 9195a6e302028ed3921d1016ac2fa5754f06ebf0)
(cherry picked from commit 55142849db02a9191db0dd6f4e1401ff19ec242a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/082a707b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/082a707b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/082a707b

Branch: refs/heads/branch-3
Commit: 082a707bae4bb97444a34c00eecd62975807388d
Parents: db8345f
Author: Sammi Chen 
Authored: Wed Jan 17 15:55:59 2018 +0800
Committer: Sammi Chen 
Committed: Wed Jan 17 16:16:03 2018 +0800

--
 .../dev-support/findbugs-exclude.xml|   8 +
 .../fs/aliyun/oss/AliyunOSSFileReaderTask.java  | 109 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  31 +++-
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 149 +--
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  13 +-
 .../apache/hadoop/fs/aliyun/oss/ReadBuffer.java |  86 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  49 ++
 8 files changed, 407 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/082a707b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
index 40d78d0..c55f8e3 100644
--- a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
@@ -15,4 +15,12 @@
limitations under the License.
 -->
 
+
+
+
+
+
+
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/082a707b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
new file mode 100644
index 000..e5bfc2c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Used by {@link AliyunOSSInputStream} as a task that is submitted
+ * to the thread pool.
+ * Each AliyunOSSFileReaderTask reads one part of the file so that
+ * we can accelerate the sequential read.
+ */
+public class AliyunOSSFileReaderTask implements Runnable {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(AliyunOSSFileReaderTask.class);
+
+  private String key;
+  private AliyunOSSFileSystemStore store;
+  private ReadBuffer readBuffer;
+  private static final int MAX_RETRIES = 3;
+  private RetryPolicy retryPolicy;
+
+  public AliyunOSSFileReaderTask(String key, AliyunOSSFileSystemStore store,
+  ReadBuffer readBuffer) {
+this.key = key;
+this.store = store;
+this.readBuffer = readBuffer;
+RetryPolicy defaultPolicy =
+RetryPolicies.retryUpToMaximumCountWithFixedSleep(
+MAX_RETRIES, 3, TimeUnit.SECONDS);
+Map<Class<? extends Exception>, RetryPolicy> policies = new HashMap<>();
+policies.put(IOException.class, default

hadoop git commit: HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

2018-01-17 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 8e7ce0eb4 -> 896dc7c78


HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential 
read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

(cherry picked from commit 9195a6e302028ed3921d1016ac2fa5754f06ebf0)
(cherry picked from commit 55142849db02a9191db0dd6f4e1401ff19ec242a)
(cherry picked from commit 082a707bae4bb97444a34c00eecd62975807388d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/896dc7c7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/896dc7c7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/896dc7c7

Branch: refs/heads/branch-2
Commit: 896dc7c7801adaa4460fa6c19a4f452a6a6112d8
Parents: 8e7ce0e
Author: Sammi Chen 
Authored: Wed Jan 17 15:55:59 2018 +0800
Committer: Sammi Chen 
Committed: Wed Jan 17 16:36:03 2018 +0800

--
 .../dev-support/findbugs-exclude.xml|   8 +
 .../fs/aliyun/oss/AliyunOSSFileReaderTask.java  | 109 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  31 +++-
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 149 +--
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  13 +-
 .../apache/hadoop/fs/aliyun/oss/ReadBuffer.java |  86 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  49 ++
 8 files changed, 407 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/896dc7c7/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
index 40d78d0..c55f8e3 100644
--- a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
@@ -15,4 +15,12 @@
limitations under the License.
 -->
 
+
+
+
+
+
+
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/896dc7c7/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
new file mode 100644
index 000..e5bfc2c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Used by {@link AliyunOSSInputStream} as a task that is submitted
+ * to the thread pool.
+ * Each AliyunOSSFileReaderTask reads one part of the file so that
+ * we can accelerate the sequential read.
+ */
+public class AliyunOSSFileReaderTask implements Runnable {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(AliyunOSSFileReaderTask.class);
+
+  private String key;
+  private AliyunOSSFileSystemStore store;
+  private ReadBuffer readBuffer;
+  private static final int MAX_RETRIES = 3;
+  private RetryPolicy retryPolicy;
+
+  public AliyunOSSFileReaderTask(String key, AliyunOSSFileSystemStore store,
+  ReadBuffer readBuffer) {
+this.key = key;
+this.store = store;
+this.readBuffer = readBuffer;
+RetryPolicy defaultPolicy =
+RetryPolicies.retryUpToMaximumCountWithFixedSleep(
+MAX_RETRIES, 3, TimeUnit.SECONDS);
+Map, RetryPolicy> pol

hadoop git commit: HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

2018-01-17 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 28f69755f -> 622f6b65d


HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential 
read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

(cherry picked from commit 9195a6e302028ed3921d1016ac2fa5754f06ebf0)
(cherry picked from commit 55142849db02a9191db0dd6f4e1401ff19ec242a)
(cherry picked from commit 082a707bae4bb97444a34c00eecd62975807388d)
(cherry picked from commit 896dc7c7801adaa4460fa6c19a4f452a6a6112d8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/622f6b65
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/622f6b65
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/622f6b65

Branch: refs/heads/branch-2.9
Commit: 622f6b65d684ce498a811784a229fb0386745711
Parents: 28f6975
Author: Sammi Chen 
Authored: Wed Jan 17 15:55:59 2018 +0800
Committer: Sammi Chen 
Committed: Wed Jan 17 16:37:25 2018 +0800

--
 .../dev-support/findbugs-exclude.xml|   8 +
 .../fs/aliyun/oss/AliyunOSSFileReaderTask.java  | 109 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  31 +++-
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 149 +--
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  13 +-
 .../apache/hadoop/fs/aliyun/oss/ReadBuffer.java |  86 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  49 ++
 8 files changed, 407 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/622f6b65/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
index 40d78d0..c55f8e3 100644
--- a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
@@ -15,4 +15,12 @@
limitations under the License.
 -->
 
+
+
+
+
+
+
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/622f6b65/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
new file mode 100644
index 000..e5bfc2c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Used by {@link AliyunOSSInputStream} as a task that is submitted
+ * to the thread pool.
+ * Each AliyunOSSFileReaderTask reads one part of the file so that
+ * we can accelerate the sequential read.
+ */
+public class AliyunOSSFileReaderTask implements Runnable {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(AliyunOSSFileReaderTask.class);
+
+  private String key;
+  private AliyunOSSFileSystemStore store;
+  private ReadBuffer readBuffer;
+  private static final int MAX_RETRIES = 3;
+  private RetryPolicy retryPolicy;
+
+  public AliyunOSSFileReaderTask(String key, AliyunOSSFileSystemStore store,
+  ReadBuffer readBuffer) {
+this.key = key;
+this.store = store;
+this.readBuffer = readBuffer;
+RetryPolicy defaultPolicy =
+RetryPolicies.retryUpToMaximumCountWithFixedSleep(
+

hadoop git commit: HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

2018-01-29 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 673200ac1 -> 91184299c


HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential 
read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

(cherry picked from commit 9195a6e302028ed3921d1016ac2fa5754f06ebf0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/91184299
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/91184299
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/91184299

Branch: refs/heads/branch-3.0
Commit: 91184299c54df18540f841cca0efe0131d05b882
Parents: 673200a
Author: Sammi Chen 
Authored: Wed Jan 17 15:55:59 2018 +0800
Committer: Sammi Chen 
Committed: Tue Jan 30 15:21:27 2018 +0800

--
 .../dev-support/findbugs-exclude.xml|   8 +
 .../fs/aliyun/oss/AliyunOSSFileReaderTask.java  | 109 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  31 +++-
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 149 +--
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  13 +-
 .../apache/hadoop/fs/aliyun/oss/ReadBuffer.java |  86 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  49 ++
 8 files changed, 407 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/91184299/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
index 40d78d0..c55f8e3 100644
--- a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
@@ -15,4 +15,12 @@
limitations under the License.
 -->
 
+
+
+
+
+
+
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/91184299/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
new file mode 100644
index 000..e5bfc2c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Used by {@link AliyunOSSInputStream} as a task that is submitted
+ * to the thread pool.
+ * Each AliyunOSSFileReaderTask reads one part of the file so that
+ * we can accelerate the sequential read.
+ */
+public class AliyunOSSFileReaderTask implements Runnable {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(AliyunOSSFileReaderTask.class);
+
+  private String key;
+  private AliyunOSSFileSystemStore store;
+  private ReadBuffer readBuffer;
+  private static final int MAX_RETRIES = 3;
+  private RetryPolicy retryPolicy;
+
+  public AliyunOSSFileReaderTask(String key, AliyunOSSFileSystemStore store,
+  ReadBuffer readBuffer) {
+this.key = key;
+this.store = store;
+this.readBuffer = readBuffer;
+RetryPolicy defaultPolicy =
+RetryPolicies.retryUpToMaximumCountWithFixedSleep(
+MAX_RETRIES, 3, TimeUnit.SECONDS);
+Map<Class<? extends Exception>, RetryPolicy> policies = new HashMap<>();
+policies.put(IOException.class, defaultPolicy);
+policies.put(IndexOutOfBoundsException.class,
+
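[Truncated again. The idea behind the patch is worth spelling out: instead of one long-running ranged read, the input stream carves the bytes ahead of the current position into fixed-size parts and hands each part to a task on a shared thread pool, so later parts are already downloading while earlier ones are consumed. The sketch below illustrates that partitioning with plain futures; RangeFetcher and fetchRange are illustrative stand-ins, not the API added by this commit.]

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Future;

    public class PreReadSketch {
      /** Illustrative stand-in for a client that can fetch a byte range of an object. */
      public interface RangeFetcher {
        byte[] fetchRange(String key, long start, long endInclusive) throws Exception;
      }

      /** Schedule [offset, offset + length) of an object as partSize-sized parallel reads. */
      public static List<Future<byte[]>> preRead(final RangeFetcher store,
          final String key, long offset, long length, long partSize,
          ExecutorService pool) {
        List<Future<byte[]>> parts = new ArrayList<Future<byte[]>>();
        for (long start = offset; start < offset + length; start += partSize) {
          final long s = start;
          final long e = Math.min(start + partSize, offset + length) - 1;
          // Each part becomes one background task; the reader later drains the
          // futures in order, which preserves sequential-read semantics.
          parts.add(pool.submit(new Callable<byte[]>() {
            @Override
            public byte[] call() throws Exception {
              return store.fetchRange(key, s, e);
            }
          }));
        }
        return parts;
      }
    }

[The committed code hands each part a ReadBuffer (see the changed-file list above) rather than a Future, so memory stays bounded, but the scheduling idea is the same.]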

hadoop git commit: HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

2018-01-30 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 eda786ea1 -> 2816bd1f4


HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential 
read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

(cherry picked from commit 9195a6e302028ed3921d1016ac2fa5754f06ebf0)
(cherry picked from commit 91184299c54df18540f841cca0efe0131d05b882)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2816bd1f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2816bd1f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2816bd1f

Branch: refs/heads/branch-2
Commit: 2816bd1f43bca3734a8eaae0b228aaa72b575792
Parents: eda786e
Author: Sammi Chen 
Authored: Wed Jan 17 15:55:59 2018 +0800
Committer: Sammi Chen 
Committed: Tue Jan 30 16:25:02 2018 +0800

--
 .../dev-support/findbugs-exclude.xml|   8 +
 .../fs/aliyun/oss/AliyunOSSFileReaderTask.java  | 109 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  31 +++-
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 149 +--
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  13 +-
 .../apache/hadoop/fs/aliyun/oss/ReadBuffer.java |  86 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  49 ++
 8 files changed, 407 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2816bd1f/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
index 40d78d0..c55f8e3 100644
--- a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
@@ -15,4 +15,12 @@
limitations under the License.
 -->
 
+
+
+
+
+
+
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2816bd1f/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
new file mode 100644
index 000..e5bfc2c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Used by {@link AliyunOSSInputStream} as a task that is submitted
+ * to the thread pool.
+ * Each AliyunOSSFileReaderTask reads one part of the file so that
+ * we can accelerate the sequential read.
+ */
+public class AliyunOSSFileReaderTask implements Runnable {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(AliyunOSSFileReaderTask.class);
+
+  private String key;
+  private AliyunOSSFileSystemStore store;
+  private ReadBuffer readBuffer;
+  private static final int MAX_RETRIES = 3;
+  private RetryPolicy retryPolicy;
+
+  public AliyunOSSFileReaderTask(String key, AliyunOSSFileSystemStore store,
+  ReadBuffer readBuffer) {
+this.key = key;
+this.store = store;
+this.readBuffer = readBuffer;
+RetryPolicy defaultPolicy =
+RetryPolicies.retryUpToMaximumCountWithFixedSleep(
+MAX_RETRIES, 3, TimeUnit.SECONDS);
+Map<Class<? extends Exception>, RetryPolicy> policies = new HashMap<>();
+policies.put(IOException.class, default
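[Truncated once more. The new ReadBuffer class listed in the summary above is the hand-off point between the background reader task and the input stream. Its exact API is not shown in this message, so the following is only one plausible shape for such a buffer, written from scratch for illustration.]

    /** Illustrative byte-range buffer: filled by a background task, awaited by the stream. */
    public class ReadBufferSketch {
      public enum Status { INIT, SUCCESS, ERROR }

      private final long byteStart;
      private final long byteEnd;
      private final byte[] buffer;
      private Status status = Status.INIT;

      public ReadBufferSketch(long byteStart, long byteEnd) {
        this.byteStart = byteStart;
        this.byteEnd = byteEnd;
        this.buffer = new byte[(int) (byteEnd - byteStart + 1)];
      }

      public byte[] getBuffer() {
        return buffer;
      }

      public long getByteStart() {
        return byteStart;
      }

      public long getByteEnd() {
        return byteEnd;
      }

      /** Called by the reader task when the download finishes or fails. */
      public synchronized void setStatus(Status newStatus) {
        this.status = newStatus;
        notifyAll();
      }

      /** Called by the input stream; blocks until the background task reports a result. */
      public synchronized Status awaitCompletion() throws InterruptedException {
        while (status == Status.INIT) {
          wait();
        }
        return status;
      }
    }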

hadoop git commit: HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

2018-01-30 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 a15df67f6 -> e30abdaaa


HADOOP-15027. AliyunOSS: Support multi-thread pre-read to improve sequential 
read from Hadoop to Aliyun OSS performance. (Contributed by Jinhu Wu)

(cherry picked from commit 9195a6e302028ed3921d1016ac2fa5754f06ebf0)
(cherry picked from commit 91184299c54df18540f841cca0efe0131d05b882)
(cherry picked from commit 2816bd1f43bca3734a8eaae0b228aaa72b575792)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e30abdaa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e30abdaa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e30abdaa

Branch: refs/heads/branch-2.9
Commit: e30abdaaaed5f1f7c162919a28bd1ba30b4679cc
Parents: a15df67
Author: Sammi Chen 
Authored: Wed Jan 17 15:55:59 2018 +0800
Committer: Sammi Chen 
Committed: Tue Jan 30 17:15:12 2018 +0800

--
 .../dev-support/findbugs-exclude.xml|   8 +
 .../fs/aliyun/oss/AliyunOSSFileReaderTask.java  | 109 ++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  31 +++-
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 149 +--
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  13 +-
 .../apache/hadoop/fs/aliyun/oss/ReadBuffer.java |  86 +++
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java |  49 ++
 8 files changed, 407 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e30abdaa/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
index 40d78d0..c55f8e3 100644
--- a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
@@ -15,4 +15,12 @@
limitations under the License.
 -->
 
+
+
+
+
+
+
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e30abdaa/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
new file mode 100644
index 000..e5bfc2c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileReaderTask.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.aliyun.oss;
+
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Used by {@link AliyunOSSInputStream} as a task that is submitted
+ * to the thread pool.
+ * Each AliyunOSSFileReaderTask reads one part of the file so that
+ * we can accelerate the sequential read.
+ */
+public class AliyunOSSFileReaderTask implements Runnable {
+  public static final Logger LOG =
+  LoggerFactory.getLogger(AliyunOSSFileReaderTask.class);
+
+  private String key;
+  private AliyunOSSFileSystemStore store;
+  private ReadBuffer readBuffer;
+  private static final int MAX_RETRIES = 3;
+  private RetryPolicy retryPolicy;
+
+  public AliyunOSSFileReaderTask(String key, AliyunOSSFileSystemStore store,
+  ReadBuffer readBuffer) {
+this.key = key;
+this.store = store;
+this.readBuffer = readBuffer;
+RetryPolicy defaultPolicy =
+RetryPolicies.retryUpToMaximumCountWithFixedSleep(
+MAX_RETRIES, 3, TimeUnit.SECONDS);
+Map<Class<? extends Exception>, RetryPolicy>
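[The retry-policy map is cut off here as well. To round out the picture, this is roughly how a reader task consults such a policy when a ranged read fails; fetchOnce is a placeholder for the real download call, and the loop simply follows the org.apache.hadoop.io.retry calling convention.]

    import java.io.IOException;
    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.io.retry.RetryPolicies;
    import org.apache.hadoop.io.retry.RetryPolicy;

    public class RetryLoopSketch {
      private static int attempts = 0;

      /** Placeholder for the real ranged GET; fails twice, then succeeds. */
      private static void fetchOnce() throws IOException {
        if (++attempts < 3) {
          throw new IOException("simulated transient failure");
        }
      }

      public static void main(String[] args) throws Exception {
        RetryPolicy policy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
            3, 1, TimeUnit.SECONDS);
        int retries = 0;
        while (true) {
          try {
            fetchOnce();
            break;  // success: hand the bytes to the waiting buffer
          } catch (IOException e) {
            RetryPolicy.RetryAction action = policy.shouldRetry(e, retries++, 0, true);
            if (action.action != RetryPolicy.RetryAction.RetryDecision.RETRY) {
              throw e;  // exhausted or non-retriable: surface the failure
            }
            Thread.sleep(action.delayMillis);
          }
        }
      }
    }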

[1/2] hadoop git commit: HADOOP-14964. AliyunOSS: backport Aliyun OSS module to branch-2 and 2.8+ branches. Contributed by Sammi Chen.

2017-11-23 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 c08a0bcad -> 32a88442d


http://git-wip-us.apache.org/repos/asf/hadoop/blob/32a88442/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
new file mode 100644
index 000..62e6505
--- /dev/null
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -0,0 +1,294 @@
+
+
+# Hadoop-Aliyun module: Integration with Aliyun Web Services
+
+
+
+## Overview
+
+The `hadoop-aliyun` module provides support for Aliyun integration with
+[Aliyun Object Storage Service (Aliyun 
OSS)](https://www.aliyun.com/product/oss).
+The generated JAR file, `hadoop-aliyun.jar` also declares a transitive
+dependency on all external artifacts which are needed for this support — 
enabling
+downstream applications to easily use this support.
+
+To make it part of Apache Hadoop's default classpath, simply make sure
+that HADOOP_OPTIONAL_TOOLS in hadoop-env.sh has 'hadoop-aliyun' in the list.
+
+### Features
+
+* Read and write data stored in Aliyun OSS.
+* Present a hierarchical file system view by implementing the standard Hadoop
+[`FileSystem`](../api/org/apache/hadoop/fs/FileSystem.html) interface.
+* Can act as a source of data in a MapReduce job, or a sink.
+
+### Warning #1: Object Stores are not filesystems.
+
+Aliyun OSS is an example of "an object store". In order to achieve scalability
+and especially high availability, Aliyun OSS has relaxed some of the 
constraints
+which classic "POSIX" filesystems promise.
+
+
+
+Specifically
+
+1. Atomic operations: `delete()` and `rename()` are implemented by recursive
+file-by-file operations. They take time at least proportional to the number of 
files,
+during which time partial updates may be visible. `delete()` and `rename()`
+can not guarantee atomicity. If the operations are interrupted, the filesystem
+is left in an intermediate state.
+2. File owner and group are persisted, but the permissions model is not 
enforced.
+Authorization occurs at the level of the entire Aliyun account via
+[Aliyun Resource Access Management (Aliyun 
RAM)](https://www.aliyun.com/product/ram).
+3. Directory last access time is not tracked.
+4. The append operation is not supported.
+
+### Warning #2: Directory last access time is not tracked
+
+Features of Hadoop relying on directory last access time can have unexpected
+behaviour. E.g. the AggregatedLogDeletionService of YARN will not remove the
+appropriate logfiles.
+
+### Warning #3: Your Aliyun credentials are valuable
+
+Your Aliyun credentials not only pay for services, they offer read and write
+access to the data. Anyone with the account can not only read your datasets
+—they can delete them.
+
+Do not inadvertently share these credentials through means such as
+1. Checking in to SCM any configuration files containing the secrets.
+2. Logging them to a console, as they invariably end up being seen.
+3. Defining filesystem URIs with the credentials in the URL, such as
+`oss://accessKeyId:accessKeySecret@directory/file`. They will end up in
+logs and error messages.
+4. Including the secrets in bug reports.
+
+If you do any of these: change your credentials immediately!
+
+### Warning #4: The Aliyun OSS client provided by Aliyun E-MapReduce is 
different from this implementation
+
+Specifically: on Aliyun E-MapReduce, `oss://` is also supported but with
+a different implementation. If you are using Aliyun E-MapReduce,
+follow these instructions —and be aware that all issues related to Aliyun
+OSS integration in E-MapReduce can only be addressed by Aliyun themselves:
+please raise your issues with them.
+
+## OSS
+
+### Authentication properties
+
+
+  fs.oss.accessKeyId
+  Aliyun access key ID
+
+
+
+  fs.oss.accessKeySecret
+  Aliyun access key secret
+
+
+
+  fs.oss.credentials.provider
+  
+Class name of a credentials provider that implements
+com.aliyun.oss.common.auth.CredentialsProvider. Omit if using 
access/secret keys
+or another authentication mechanism. The specified class must provide 
an
+accessible constructor accepting java.net.URI and
+org.apache.hadoop.conf.Configuration, or an accessible default 
constructor.
+  
+
+
+### Other properties
+
+
+  fs.oss.endpoint
+  Aliyun OSS endpoint to connect to. An up-to-date list is
+provided in the Aliyun OSS Documentation.
+   
+
+
+
+  fs.oss.proxy.host
+  Hostname of the (optional) proxy server for Aliyun OSS 
connection
+
+
+
+  fs.oss.proxy.port
+  Proxy server port
+
+
+
+  fs.oss.proxy.username
+  Username for authenticating with proxy server
+
+
+
+  fs.oss.proxy.password
+  Password fo
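[The documentation is cut off in this message, but the fs.oss.credentials.provider property described above deserves a concrete illustration. The sketch below shows what a minimal provider could look like, assuming the SDK interface exposes getCredentials/setCredentials the way the bundled AliyunCredentialsProvider uses it; the class name and the idea of pulling the keys from the Hadoop configuration are illustrative choices, not part of the commit.]

    import java.net.URI;

    import com.aliyun.oss.common.auth.Credentials;
    import com.aliyun.oss.common.auth.CredentialsProvider;
    import com.aliyun.oss.common.auth.DefaultCredentials;
    import org.apache.hadoop.conf.Configuration;

    /**
     * Wired in via fs.oss.credentials.provider. The (URI, Configuration)
     * constructor matches the contract described in the documentation above.
     */
    public class StaticOSSCredentialsProvider implements CredentialsProvider {
      private volatile Credentials credentials;

      public StaticOSSCredentialsProvider(URI uri, Configuration conf) {
        // A real provider would fetch secrets from a secure store rather than
        // from plain configuration keys.
        this.credentials = new DefaultCredentials(
            conf.getTrimmed("fs.oss.accessKeyId"),
            conf.getTrimmed("fs.oss.accessKeySecret"));
      }

      @Override
      public void setCredentials(Credentials creds) {
        this.credentials = creds;
      }

      @Override
      public Credentials getCredentials() {
        return credentials;
      }
    }

[Pointing fs.oss.credentials.provider at such a class lets the filesystem obtain keys without placing them directly in core-site.xml; with no provider configured, the access key properties shown above are used as-is.]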

[2/2] hadoop git commit: HADOOP-14964. AliyunOSS: backport Aliyun OSS module to branch-2 and 2.8+ branches. Contributed by Sammi Chen.

2017-11-23 Thread sammichen
HADOOP-14964. AliyunOSS: backport Aliyun OSS module to branch-2 and 2.8+ 
branches. Contributed by Sammi Chen.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/32a88442
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/32a88442
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/32a88442

Branch: refs/heads/branch-2.9
Commit: 32a88442d0f9e9860b1f179da586894cea6a9e10
Parents: c08a0bc
Author: Sammi Chen 
Authored: Fri Nov 24 14:06:01 2017 +0800
Committer: Sammi Chen 
Committed: Fri Nov 24 14:06:01 2017 +0800

--
 hadoop-project/pom.xml  |  22 +-
 .../dev-support/findbugs-exclude.xml|  18 +
 hadoop-tools/hadoop-aliyun/pom.xml  | 147 +
 .../aliyun/oss/AliyunCredentialsProvider.java   |  87 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  | 608 +++
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 549 +
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 262 
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 167 +
 .../apache/hadoop/fs/aliyun/oss/Constants.java  | 113 
 .../hadoop/fs/aliyun/oss/package-info.java  |  22 +
 .../site/markdown/tools/hadoop-aliyun/index.md  | 294 +
 .../fs/aliyun/oss/AliyunOSSTestUtils.java   |  77 +++
 .../fs/aliyun/oss/TestAliyunCredentials.java|  78 +++
 .../oss/TestAliyunOSSFileSystemContract.java| 218 +++
 .../oss/TestAliyunOSSFileSystemStore.java   | 125 
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java | 155 +
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 +++
 .../aliyun/oss/contract/AliyunOSSContract.java  |  49 ++
 .../contract/TestAliyunOSSContractCreate.java   |  35 ++
 .../contract/TestAliyunOSSContractDelete.java   |  34 ++
 .../contract/TestAliyunOSSContractDistCp.java   |  44 ++
 .../TestAliyunOSSContractGetFileStatus.java |  35 ++
 .../contract/TestAliyunOSSContractMkdir.java|  34 ++
 .../oss/contract/TestAliyunOSSContractOpen.java |  34 ++
 .../contract/TestAliyunOSSContractRename.java   |  35 ++
 .../contract/TestAliyunOSSContractRootDir.java  |  69 +++
 .../oss/contract/TestAliyunOSSContractSeek.java |  60 ++
 .../src/test/resources/contract/aliyun-oss.xml  | 120 
 .../src/test/resources/core-site.xml|  46 ++
 .../src/test/resources/log4j.properties |  23 +
 hadoop-tools/hadoop-tools-dist/pom.xml  |   6 +
 hadoop-tools/pom.xml|   1 +
 33 files changed, 3768 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/32a88442/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 4eac7d5..4bb640f 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -483,7 +483,11 @@
 hadoop-aws
 ${project.version}
   
-
+  
+org.apache.hadoop
+hadoop-aliyun
+${project.version}
+  
   
 org.apache.hadoop
 hadoop-kms
@@ -1078,6 +1082,22 @@
 2.9.1
   
 
+  
+com.aliyun.oss
+aliyun-sdk-oss
+2.8.1
+
+  
+org.apache.httpcomponents
+httpclient
+  
+  
+commons-beanutils
+commons-beanutils
+  
+
+ 
+
  
org.apache.curator
curator-recipes

http://git-wip-us.apache.org/repos/asf/hadoop/blob/32a88442/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
new file mode 100644
index 000..40d78d0
--- /dev/null
+++ b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
@@ -0,0 +1,18 @@
+
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/32a88442/hadoop-tools/hadoop-aliyun/pom.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/pom.xml 
b/hadoop-tools/hadoop-aliyun/pom.xml
new file mode 100644
index 000..357786b
--- /dev/null
+++ b/hadoop-tools/hadoop-aliyun/pom.xml
@@ -0,0 +1,147 @@
+
+
+http://maven.apache.org/POM/4.0.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd";>
+  4.0.0
+  
+org.apache.hadoop
+hadoop-project
+2.9.1-SNAPSHOT
+../../hadoop-project
+  
+  hadoop-aliyun
+  Apache Hadoop Aliyun OSS support
+  jar
+
+  
+UTF-8
+true
+  
+
+  
+
+  tests-off
+  
+
+  src/test/reso

[4/4] hadoop git commit: HADOOP-14964. AliyunOSS: backport Aliyun OSS module to branch-2. Contributed by Sammi Chen.

2017-11-26 Thread sammichen
 HADOOP-14964. AliyunOSS: backport Aliyun OSS module to branch-2. Contributed 
by Sammi Chen.

The consolidated commits in this backport are as follows:
HADOOP-14787. AliyunOSS: Implement the `createNonRecursive` operator.
HADOOP-14649. Update aliyun-sdk-oss version to 2.8.1. (Genmao Yu via 
rchiang)
HADOOP-14194. Aliyun OSS should not use empty endpoint as default. 
Contributed by Genmao Yu
HADOOP-14466. Remove useless document from 
TestAliyunOSSFileSystemContract.java. Contributed by Chen Liang.
HADOOP-14458. Add missing imports to 
TestAliyunOSSFileSystemContract.java. Contributed by Mingliang Liu.
HADOOP-14192. AliyunOSS FileSystem contract test should implement 
getTestBaseDir(). Contributed by Mingliang Liu
HADOOP-14072. AliyunOSS: Failed to read from stream when seek beyond 
the download size. Contributed by Genmao Yu
HADOOP-13769. AliyunOSS: update oss sdk version. Contributed by Genmao 
Yu
HADOOP-14069. AliyunOSS: listStatus returns wrong file info. 
Contributed by Fei Hui
HADOOP-13768. AliyunOSS: handle the failure in the batch delete 
operation `deleteDirs`. Contributed by Genmao Yu
HADOOP-14065. AliyunOSS: oss directory filestatus should use meta time. 
Contributed by Fei Hui
HADOOP-14045. Aliyun OSS documentation missing from website. 
Contributed by Yiqun Lin.
HADOOP-13723. AliyunOSSInputStream#read() should update read bytes stat 
correctly. Contributed by Mingliang Liu
HADOOP-13624.  Rename TestAliyunOSSContractDispCp. Contributed by 
Genmao Yu
HADOOP-13591. Unit test failure in TestOSSContractGetFileStatus and 
TestOSSContractRootDir. Contributed by Genmao Yu
HADOOP-13481. User documents for Aliyun OSS FileSystem. Contributed by 
Genmao Yu.
HADOOP-12756. Incorporate Aliyun OSS file system implementation. 
Contributed by Mingfei Shi and Lin Zhou

 (cherry picked from commit 30ab9b6aef2e3d31f2a8fc9211b5324b3d42f18e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b756beb6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b756beb6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b756beb6

Branch: refs/heads/branch-2.9
Commit: b756beb6793f1d283703749c1fab92a42325ef6e
Parents: 12901cd
Author: Sammi Chen 
Authored: Mon Nov 27 11:35:17 2017 +0800
Committer: Sammi Chen 
Committed: Mon Nov 27 11:35:17 2017 +0800

--
 hadoop-project/pom.xml  |  22 +-
 .../dev-support/findbugs-exclude.xml|  18 +
 hadoop-tools/hadoop-aliyun/pom.xml  | 147 +
 .../aliyun/oss/AliyunCredentialsProvider.java   |  87 +++
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  | 608 +++
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 549 +
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 262 
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 167 +
 .../apache/hadoop/fs/aliyun/oss/Constants.java  | 113 
 .../hadoop/fs/aliyun/oss/package-info.java  |  22 +
 .../site/markdown/tools/hadoop-aliyun/index.md  | 294 +
 .../fs/aliyun/oss/AliyunOSSTestUtils.java   |  77 +++
 .../fs/aliyun/oss/TestAliyunCredentials.java|  78 +++
 .../oss/TestAliyunOSSFileSystemContract.java| 218 +++
 .../oss/TestAliyunOSSFileSystemStore.java   | 125 
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java | 155 +
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 +++
 .../aliyun/oss/contract/AliyunOSSContract.java  |  49 ++
 .../contract/TestAliyunOSSContractCreate.java   |  35 ++
 .../contract/TestAliyunOSSContractDelete.java   |  34 ++
 .../contract/TestAliyunOSSContractDistCp.java   |  44 ++
 .../TestAliyunOSSContractGetFileStatus.java |  35 ++
 .../contract/TestAliyunOSSContractMkdir.java|  34 ++
 .../oss/contract/TestAliyunOSSContractOpen.java |  34 ++
 .../contract/TestAliyunOSSContractRename.java   |  35 ++
 .../contract/TestAliyunOSSContractRootDir.java  |  69 +++
 .../oss/contract/TestAliyunOSSContractSeek.java |  60 ++
 .../src/test/resources/contract/aliyun-oss.xml  | 120 
 .../src/test/resources/core-site.xml|  46 ++
 .../src/test/resources/log4j.properties |  23 +
 hadoop-tools/hadoop-tools-dist/pom.xml  |   6 +
 hadoop-tools/pom.xml|   1 +
 33 files changed, 3768 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b756beb6/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 4eac7d5..4bb640f 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -483,7 +483,11 @@
 

[2/4] hadoop git commit: Revert "HADOOP-14964. AliyunOSS: backport Aliyun OSS module to branch-2 and 2.8+ branches. Contributed by Sammi Chen."

2017-11-26 Thread sammichen
Revert "HADOOP-14964. AliyunOSS: backport Aliyun OSS module to branch-2 and 
2.8+ branches. Contributed by Sammi Chen."

This reverts commit 32a88442d0f9e9860b1f179da586894cea6a9e10.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/12901cdc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/12901cdc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/12901cdc

Branch: refs/heads/branch-2.9
Commit: 12901cdceefecafc562c9ad47f158d5e5139a026
Parents: 32a8844
Author: Sammi Chen 
Authored: Mon Nov 27 10:46:15 2017 +0800
Committer: Sammi Chen 
Committed: Mon Nov 27 10:46:15 2017 +0800

--
 hadoop-project/pom.xml  |  22 +-
 .../dev-support/findbugs-exclude.xml|  18 -
 hadoop-tools/hadoop-aliyun/pom.xml  | 147 -
 .../aliyun/oss/AliyunCredentialsProvider.java   |  87 ---
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  | 608 ---
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 549 -
 .../fs/aliyun/oss/AliyunOSSInputStream.java | 262 
 .../fs/aliyun/oss/AliyunOSSOutputStream.java| 111 
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java| 167 -
 .../apache/hadoop/fs/aliyun/oss/Constants.java  | 113 
 .../hadoop/fs/aliyun/oss/package-info.java  |  22 -
 .../site/markdown/tools/hadoop-aliyun/index.md  | 294 -
 .../fs/aliyun/oss/AliyunOSSTestUtils.java   |  77 ---
 .../fs/aliyun/oss/TestAliyunCredentials.java|  78 ---
 .../oss/TestAliyunOSSFileSystemContract.java| 218 ---
 .../oss/TestAliyunOSSFileSystemStore.java   | 125 
 .../fs/aliyun/oss/TestAliyunOSSInputStream.java | 155 -
 .../aliyun/oss/TestAliyunOSSOutputStream.java   |  91 ---
 .../aliyun/oss/contract/AliyunOSSContract.java  |  49 --
 .../contract/TestAliyunOSSContractCreate.java   |  35 --
 .../contract/TestAliyunOSSContractDelete.java   |  34 --
 .../contract/TestAliyunOSSContractDistCp.java   |  44 --
 .../TestAliyunOSSContractGetFileStatus.java |  35 --
 .../contract/TestAliyunOSSContractMkdir.java|  34 --
 .../oss/contract/TestAliyunOSSContractOpen.java |  34 --
 .../contract/TestAliyunOSSContractRename.java   |  35 --
 .../contract/TestAliyunOSSContractRootDir.java  |  69 ---
 .../oss/contract/TestAliyunOSSContractSeek.java |  60 --
 .../src/test/resources/contract/aliyun-oss.xml  | 120 
 .../src/test/resources/core-site.xml|  46 --
 .../src/test/resources/log4j.properties |  23 -
 hadoop-tools/hadoop-tools-dist/pom.xml  |   6 -
 hadoop-tools/pom.xml|   1 -
 33 files changed, 1 insertion(+), 3768 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/12901cdc/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 4bb640f..4eac7d5 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -483,11 +483,7 @@
 hadoop-aws
 ${project.version}
   
-  
-org.apache.hadoop
-hadoop-aliyun
-${project.version}
-  
+
   
 org.apache.hadoop
 hadoop-kms
@@ -1082,22 +1078,6 @@
 2.9.1
   
 
-  
-com.aliyun.oss
-aliyun-sdk-oss
-2.8.1
-
-  
-org.apache.httpcomponents
-httpclient
-  
-  
-commons-beanutils
-commons-beanutils
-  
-
- 
-
  
org.apache.curator
curator-recipes

http://git-wip-us.apache.org/repos/asf/hadoop/blob/12901cdc/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
deleted file mode 100644
index 40d78d0..000
--- a/hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/12901cdc/hadoop-tools/hadoop-aliyun/pom.xml
--
diff --git a/hadoop-tools/hadoop-aliyun/pom.xml 
b/hadoop-tools/hadoop-aliyun/pom.xml
deleted file mode 100644
index 357786b..000
--- a/hadoop-tools/hadoop-aliyun/pom.xml
+++ /dev/null
@@ -1,147 +0,0 @@
-
-
-http://maven.apache.org/POM/4.0.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
-  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd";>
-  4.0.0
-  
-org.apache.hadoop
-hadoop-project
-2.9.1-SNAPSHOT
-../../hadoop-project
-  
-  hadoop-aliyun
-  Apache Hadoop Aliyun OSS support
-  jar
-
-  
-UTF-8
-

[1/4] hadoop git commit: Revert "HADOOP-14964. AliyunOSS: backport Aliyun OSS module to branch-2 and 2.8+ branches. Contributed by Sammi Chen."

2017-11-26 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 32a88442d -> b756beb67


http://git-wip-us.apache.org/repos/asf/hadoop/blob/12901cdc/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
deleted file mode 100644
index 62e6505..000
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ /dev/null
@@ -1,294 +0,0 @@
-
-
-# Hadoop-Aliyun module: Integration with Aliyun Web Services
-
-
-
-## Overview
-
-The `hadoop-aliyun` module provides support for Aliyun integration with
-[Aliyun Object Storage Service (Aliyun 
OSS)](https://www.aliyun.com/product/oss).
-The generated JAR file, `hadoop-aliyun.jar` also declares a transitive
-dependency on all external artifacts which are needed for this support — 
enabling
-downstream applications to easily use this support.
-
-To make it part of Apache Hadoop's default classpath, simply make sure
-that HADOOP_OPTIONAL_TOOLS in hadoop-env.sh has 'hadoop-aliyun' in the list.
-
-### Features
-
-* Read and write data stored in Aliyun OSS.
-* Present a hierarchical file system view by implementing the standard Hadoop
-[`FileSystem`](../api/org/apache/hadoop/fs/FileSystem.html) interface.
-* Can act as a source of data in a MapReduce job, or a sink.
-
-### Warning #1: Object Stores are not filesystems.
-
-Aliyun OSS is an example of "an object store". In order to achieve scalability
-and especially high availability, Aliyun OSS has relaxed some of the 
constraints
-which classic "POSIX" filesystems promise.
-
-
-
-Specifically
-
-1. Atomic operations: `delete()` and `rename()` are implemented by recursive
-file-by-file operations. They take time at least proportional to the number of 
files,
-during which time partial updates may be visible. `delete()` and `rename()`
-can not guarantee atomicity. If the operations are interrupted, the filesystem
-is left in an intermediate state.
-2. File owner and group are persisted, but the permissions model is not 
enforced.
-Authorization occurs at the level of the entire Aliyun account via
-[Aliyun Resource Access Management (Aliyun 
RAM)](https://www.aliyun.com/product/ram).
-3. Directory last access time is not tracked.
-4. The append operation is not supported.
-
-### Warning #2: Directory last access time is not tracked,
-features of Hadoop relying on this can have unexpected behaviour. E.g. the
-AggregatedLogDeletionService of YARN will not remove the appropriate logfiles.
-
-### Warning #3: Your Aliyun credentials are valuable
-
-Your Aliyun credentials not only pay for services, they offer read and write
-access to the data. Anyone with the account can not only read your datasets
-—they can delete them.
-
-Do not inadvertently share these credentials through means such as
-1. Checking in to SCM any configuration files containing the secrets.
-2. Logging them to a console, as they invariably end up being seen.
-3. Defining filesystem URIs with the credentials in the URL, such as
-`oss://accessKeyId:accessKeySecret@directory/file`. They will end up in
-logs and error messages.
-4. Including the secrets in bug reports.
-
-If you do any of these: change your credentials immediately!
-
-### Warning #4: The Aliyun OSS client provided by Aliyun E-MapReduce are 
different from this implementation
-
-Specifically: on Aliyun E-MapReduce, `oss://` is also supported but with
-a different implementation. If you are using Aliyun E-MapReduce,
-follow these instructions —and be aware that all issues related to Aliyun
-OSS integration in E-MapReduce can only be addressed by Aliyun themselves:
-please raise your issues with them.
-
-## OSS
-
-### Authentication properties
-
-
-  fs.oss.accessKeyId
-  Aliyun access key ID
-
-
-
-  fs.oss.accessKeySecret
-  Aliyun access key secret
-
-
-
-  fs.oss.credentials.provider
-  
-Class name of a credentials provider that implements
-com.aliyun.oss.common.auth.CredentialsProvider. Omit if using 
access/secret keys
-or another authentication mechanism. The specified class must provide 
an
-accessible constructor accepting java.net.URI and
-org.apache.hadoop.conf.Configuration, or an accessible default 
constructor.
-  
-
-
-### Other properties
-
-
-  fs.oss.endpoint
-  Aliyun OSS endpoint to connect to. An up-to-date list is
-provided in the Aliyun OSS Documentation.
-   
-
-
-
-  fs.oss.proxy.host
-  Hostname of the (optinal) proxy server for Aliyun OSS 
connection
-
-
-
-  fs.oss.proxy.port
-  Proxy server port
-
-
-
-  fs.oss.proxy.username
-  Username for authenticating with proxy server
-
-
-
-  fs.oss.proxy.password
-  Passwor

[3/4] hadoop git commit: HADOOP-14964. AliyunOSS: backport Aliyun OSS module to branch-2. Contributed by Sammi Chen.

2017-11-26 Thread sammichen
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b756beb6/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
new file mode 100644
index 000..62e6505
--- /dev/null
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -0,0 +1,294 @@
+
+
+# Hadoop-Aliyun module: Integration with Aliyun Web Services
+
+
+
+## Overview
+
+The `hadoop-aliyun` module provides support for Aliyun integration with
+[Aliyun Object Storage Service (Aliyun 
OSS)](https://www.aliyun.com/product/oss).
+The generated JAR file, `hadoop-aliyun.jar` also declares a transitive
+dependency on all external artifacts which are needed for this support — 
enabling
+downstream applications to easily use this support.
+
+To make it part of Apache Hadoop's default classpath, simply make sure
+that HADOOP_OPTIONAL_TOOLS in hadoop-env.sh has 'hadoop-aliyun' in the list.
+
+### Features
+
+* Read and write data stored in Aliyun OSS.
+* Present a hierarchical file system view by implementing the standard Hadoop
+[`FileSystem`](../api/org/apache/hadoop/fs/FileSystem.html) interface.
+* Can act as a source of data in a MapReduce job, or a sink.
+
+### Warning #1: Object Stores are not filesystems.
+
+Aliyun OSS is an example of "an object store". In order to achieve scalability
+and especially high availability, Aliyun OSS has relaxed some of the 
constraints
+which classic "POSIX" filesystems promise.
+
+
+
+Specifically
+
+1. Atomic operations: `delete()` and `rename()` are implemented by recursive
+file-by-file operations. They take time at least proportional to the number of 
files,
+during which time partial updates may be visible. `delete()` and `rename()`
+can not guarantee atomicity. If the operations are interrupted, the filesystem
+is left in an intermediate state.
+2. File owner and group are persisted, but the permissions model is not 
enforced.
+Authorization occurs at the level of the entire Aliyun account via
+[Aliyun Resource Access Management (Aliyun 
RAM)](https://www.aliyun.com/product/ram).
+3. Directory last access time is not tracked.
+4. The append operation is not supported.
+
+### Warning #2: Directory last access time is not tracked
+
+Features of Hadoop relying on directory last access time can have unexpected
+behaviour. E.g. the AggregatedLogDeletionService of YARN will not remove the
+appropriate logfiles.
+
+### Warning #3: Your Aliyun credentials are valuable
+
+Your Aliyun credentials not only pay for services, they offer read and write
+access to the data. Anyone with the account can not only read your datasets
+—they can delete them.
+
+Do not inadvertently share these credentials through means such as
+1. Checking in to SCM any configuration files containing the secrets.
+2. Logging them to a console, as they invariably end up being seen.
+3. Defining filesystem URIs with the credentials in the URL, such as
+`oss://accessKeyId:accessKeySecret@directory/file`. They will end up in
+logs and error messages.
+4. Including the secrets in bug reports.
+
+If you do any of these: change your credentials immediately!
+
+### Warning #4: The Aliyun OSS client provided by Aliyun E-MapReduce is 
different from this implementation
+
+Specifically: on Aliyun E-MapReduce, `oss://` is also supported but with
+a different implementation. If you are using Aliyun E-MapReduce,
+follow these instructions —and be aware that all issues related to Aliyun
+OSS integration in E-MapReduce can only be addressed by Aliyun themselves:
+please raise your issues with them.
+
+## OSS
+
+### Authentication properties
+
+
+  fs.oss.accessKeyId
+  Aliyun access key ID
+
+
+
+  fs.oss.accessKeySecret
+  Aliyun access key secret
+
+
+
+  fs.oss.credentials.provider
+  
+Class name of a credentials provider that implements
+com.aliyun.oss.common.auth.CredentialsProvider. Omit if using 
access/secret keys
+or another authentication mechanism. The specified class must provide 
an
+accessible constructor accepting java.net.URI and
+org.apache.hadoop.conf.Configuration, or an accessible default 
constructor.
+  
+
+
+### Other properties
+
+
+  fs.oss.endpoint
+  Aliyun OSS endpoint to connect to. An up-to-date list is
+provided in the Aliyun OSS Documentation.
+   
+
+
+
+  fs.oss.proxy.host
+  Hostname of the (optional) proxy server for Aliyun OSS 
connection
+
+
+
+  fs.oss.proxy.port
+  Proxy server port
+
+
+
+  fs.oss.proxy.username
+  Username for authenticating with proxy server
+
+
+
+  fs.oss.proxy.password
+  Password for authenticating with proxy server.
+
+
+
+  fs.oss.proxy.domain
+  Do
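[This copy of the documentation is also truncated. As a small end-to-end illustration of the properties it documents, the following uses only the public Hadoop FileSystem API; the endpoint, bucket name and key values are placeholders, and in practice the secrets belong in a credential provider rather than in code.]

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OssQuickStart {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Properties documented above; all values here are placeholders.
        conf.set("fs.oss.endpoint", "oss-cn-hangzhou.aliyuncs.com");
        conf.set("fs.oss.accessKeyId", "YOUR_ACCESS_KEY_ID");
        conf.set("fs.oss.accessKeySecret", "YOUR_ACCESS_KEY_SECRET");
        // If the oss:// scheme is not picked up automatically, also set
        // fs.oss.impl to org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.

        // Requires hadoop-aliyun (and the OSS SDK) on the classpath, e.g. via
        // HADOOP_OPTIONAL_TOOLS as described in the overview.
        FileSystem fs = FileSystem.get(URI.create("oss://my-bucket/"), conf);
        Path p = new Path("/tmp/hello.txt");
        try (FSDataOutputStream out = fs.create(p, true)) {
          out.writeUTF("hello from hadoop-aliyun");
        }
        System.out.println(fs.getFileStatus(p));
      }
    }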

hadoop git commit: HADOOP-15080. Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x json-lib.

2017-12-07 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk e411dd666 -> 67b2661e3


HADOOP-15080.  Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its 
dependency on Cat-x json-lib.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/67b2661e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/67b2661e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/67b2661e

Branch: refs/heads/trunk
Commit: 67b2661e3d73a68ba7ca73b112bf6baea128631e
Parents: e411dd6
Author: Sammi Chen 
Authored: Thu Dec 7 22:46:11 2017 +0800
Committer: Sammi Chen 
Committed: Thu Dec 7 22:46:11 2017 +0800

--
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/67b2661e/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 04b93c4..0866f3e 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1165,7 +1165,7 @@
   
 com.aliyun.oss
 aliyun-sdk-oss
-2.8.1
+2.8.3
 
   
 org.apache.httpcomponents





hadoop git commit: HADOOP-15080. Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x json-lib

2017-12-07 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0.0 7f354097a -> cb307d5b8


HADOOP-15080. Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its 
dependency on Cat-x json-lib


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cb307d5b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cb307d5b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cb307d5b

Branch: refs/heads/branch-3.0.0
Commit: cb307d5b8e1184b247e0e7a224c9535e1ef3c671
Parents: 7f35409
Author: Sammi Chen 
Authored: Thu Dec 7 23:11:40 2017 +0800
Committer: Sammi Chen 
Committed: Thu Dec 7 23:11:40 2017 +0800

--
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb307d5b/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 47a26f8..cc3f11d2 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1154,7 +1154,7 @@
   
 com.aliyun.oss
 aliyun-sdk-oss
-2.8.1
+2.8.3
 
   
 org.apache.httpcomponents





hadoop git commit: HADOOP-15080. Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x json-lib

2017-12-07 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 27c7a1f22 -> afcbfbf7f


HADOOP-15080. Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its 
dependency on Cat-x json-lib


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/afcbfbf7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/afcbfbf7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/afcbfbf7

Branch: refs/heads/branch-3.0
Commit: afcbfbf7f4184e7f3d83e1bf7340eadda7c914d6
Parents: 27c7a1f
Author: Sammi Chen 
Authored: Thu Dec 7 23:15:15 2017 +0800
Committer: Sammi Chen 
Committed: Thu Dec 7 23:15:15 2017 +0800

--
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/afcbfbf7/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 96e520e..92ed9be 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1154,7 +1154,7 @@
   
 com.aliyun.oss
 aliyun-sdk-oss
-2.8.1
+2.8.3
 
   
 org.apache.httpcomponents





hadoop git commit: HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. Contributed by Genmao Yu

2017-12-08 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 029714e3e -> 96adff737


HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. 
Contributed by Genmao Yu

(cherry picked from commit cde56b9cefe1eb2943eef56a6aa7fdfa1b78e909)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/96adff73
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/96adff73
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/96adff73

Branch: refs/heads/branch-2
Commit: 96adff737041774542aec97c5f036003e389389d
Parents: 029714e
Author: Sammi Chen 
Authored: Fri Dec 8 20:08:16 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 8 20:08:16 2017 +0800

--
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/96adff73/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
--
diff --git a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml 
b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
index afd88cf..b5759cf 100644
--- a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
+++ b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
@@ -110,6 +110,11 @@
 
 
   org.apache.hadoop
+  hadoop-aliyun
+  compile
+
+
+  org.apache.hadoop
   hadoop-aws
   compile
 





hadoop git commit: HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. Contributed by Genmao Yu.

2017-12-08 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 d06281486 -> 0f95a1d26


HADOOP-14997. Add hadoop-aliyun as dependency of hadoop-cloud-storage. 
Contributed by Genmao Yu.

(cherry picked from commit 96adff737041774542aec97c5f036003e389389d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0f95a1d2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0f95a1d2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0f95a1d2

Branch: refs/heads/branch-2.9
Commit: 0f95a1d26a2f3ffbd9c1168e97165bbd6409cca9
Parents: d062814
Author: Sammi Chen 
Authored: Fri Dec 8 20:15:32 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 8 20:15:32 2017 +0800

--
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0f95a1d2/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
--
diff --git a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml 
b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
index 4c06df2..0fe1132 100644
--- a/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
+++ b/hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml
@@ -110,6 +110,11 @@
 
 
   org.apache.hadoop
+  hadoop-aliyun
+  compile
+
+
+  org.apache.hadoop
   hadoop-aws
   compile
 





hadoop git commit: HADOOP-14993. AliyunOSS: Override listFiles and listLocatedStatus. Contributed Genmao Yu.

2017-12-08 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 fc9e15648 -> fb809e05d


HADOOP-14993. AliyunOSS: Override listFiles and listLocatedStatus. Contributed 
Genmao Yu.

(cherry picked from commit 18621af7ae8f8ed703245744f8f2a770d07bbfb9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fb809e05
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fb809e05
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fb809e05

Branch: refs/heads/branch-3.0
Commit: fb809e05dca29b87d730ccd2123220d5a3b7c479
Parents: fc9e156
Author: Sammi Chen 
Authored: Fri Dec 8 20:40:14 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 8 20:40:14 2017 +0800

--
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  75 +--
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 106 
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../fs/aliyun/oss/FileStatusAcceptor.java   | 125 +++
 .../site/markdown/tools/hadoop-aliyun/index.md  |   6 +-
 5 files changed, 309 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb809e05/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 3561b02..41d475d 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -28,14 +28,18 @@ import java.util.List;
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.util.Progressable;
 
@@ -46,6 +50,7 @@ import com.aliyun.oss.model.ObjectMetadata;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static 
org.apache.hadoop.fs.aliyun.oss.AliyunOSSUtils.objectRepresentsDirectory;
 import static org.apache.hadoop.fs.aliyun.oss.Constants.*;
 
 /**
@@ -60,6 +65,12 @@ public class AliyunOSSFileSystem extends FileSystem {
   private Path workingDir;
   private AliyunOSSFileSystemStore store;
   private int maxKeys;
+  private static final PathFilter DEFAULT_FILTER = new PathFilter() {
+@Override
+public boolean accept(Path file) {
+  return true;
+}
+  };
 
   @Override
   public FSDataOutputStream append(Path path, int bufferSize,
@@ -302,18 +313,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
 
   /**
-   * Check if OSS object represents a directory.
-   *
-   * @param name object key
-   * @param size object content length
-   * @return true if object represents a directory
-   */
-  private boolean objectRepresentsDirectory(final String name,
-  final long size) {
-return StringUtils.isNotEmpty(name) && name.endsWith("/") && size == 0L;
-  }
-
-  /**
* Turn a path (relative or otherwise) into an OSS key.
*
* @param path the path of the file.
@@ -404,6 +403,58 @@ public class AliyunOSSFileSystem extends FileSystem {
 return result.toArray(new FileStatus[result.size()]);
   }
 
+  @Override
+  public RemoteIterator<LocatedFileStatus> listFiles(
+  final Path f, final boolean recursive) throws IOException {
+Path qualifiedPath = f.makeQualified(uri, workingDir);
+final FileStatus status = getFileStatus(qualifiedPath);
+PathFilter filter = new PathFilter() {
+  @Override
+  public boolean accept(Path path) {
+return status.isFile() || !path.equals(f);
+  }
+};
+FileStatusAcceptor acceptor =
+new FileStatusAcceptor.AcceptFilesOnly(qualifiedPath);
+return innerList(f, status, filter, acceptor, recursive);
+  }
+
+  @Override
+  public RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f)
+  throws IOException {
+return listLocatedStatus(f, DEFAULT_FILTER);
+  }
+
+  @Override
+  public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f,
+  final PathFilter filter) throws IOExcep
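[Truncated here. From a user's point of view the new overrides keep the standard FileSystem calling pattern; a short usage sketch, assuming credentials and endpoint are already configured:]

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class ListFilesExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // OSS settings come from core-site.xml
        FileSystem fs = FileSystem.get(URI.create("oss://my-bucket/"), conf);

        // Recursively enumerate every file under /data; results arrive through an
        // iterator, so very large trees do not have to be materialised at once.
        RemoteIterator<LocatedFileStatus> files = fs.listFiles(new Path("/data"), true);
        while (files.hasNext()) {
          LocatedFileStatus status = files.next();
          System.out.println(status.getPath() + "\t" + status.getLen());
        }
      }
    }

[Per the changed files above, the point of the override is that this enumeration is driven by the object-store listing rather than by recursive listStatus calls.]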

hadoop git commit: HADOOP-14993. AliyunOSS: Override listFiles and listLocatedStatus. Contributed Genmao Yu.

2017-12-08 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0.0 e309d25d2 -> 1e0f7a163


HADOOP-14993. AliyunOSS: Override listFiles and listLocatedStatus. Contributed 
Genmao Yu.

(cherry picked from commit fb809e05dca29b87d730ccd2123220d5a3b7c479)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1e0f7a16
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1e0f7a16
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1e0f7a16

Branch: refs/heads/branch-3.0.0
Commit: 1e0f7a163144f43260aa0a2e501b981699bf767e
Parents: e309d25
Author: Sammi Chen 
Authored: Fri Dec 8 20:44:40 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 8 20:44:40 2017 +0800

--
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  75 +--
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 106 
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../fs/aliyun/oss/FileStatusAcceptor.java   | 125 +++
 .../site/markdown/tools/hadoop-aliyun/index.md  |   6 +-
 5 files changed, 309 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1e0f7a16/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 3561b02..41d475d 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -28,14 +28,18 @@ import java.util.List;
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.util.Progressable;
 
@@ -46,6 +50,7 @@ import com.aliyun.oss.model.ObjectMetadata;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static 
org.apache.hadoop.fs.aliyun.oss.AliyunOSSUtils.objectRepresentsDirectory;
 import static org.apache.hadoop.fs.aliyun.oss.Constants.*;
 
 /**
@@ -60,6 +65,12 @@ public class AliyunOSSFileSystem extends FileSystem {
   private Path workingDir;
   private AliyunOSSFileSystemStore store;
   private int maxKeys;
+  private static final PathFilter DEFAULT_FILTER = new PathFilter() {
+@Override
+public boolean accept(Path file) {
+  return true;
+}
+  };
 
   @Override
   public FSDataOutputStream append(Path path, int bufferSize,
@@ -302,18 +313,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
 
   /**
-   * Check if OSS object represents a directory.
-   *
-   * @param name object key
-   * @param size object content length
-   * @return true if object represents a directory
-   */
-  private boolean objectRepresentsDirectory(final String name,
-  final long size) {
-return StringUtils.isNotEmpty(name) && name.endsWith("/") && size == 0L;
-  }
-
-  /**
* Turn a path (relative or otherwise) into an OSS key.
*
* @param path the path of the file.
@@ -404,6 +403,58 @@ public class AliyunOSSFileSystem extends FileSystem {
 return result.toArray(new FileStatus[result.size()]);
   }
 
+  @Override
+  public RemoteIterator<LocatedFileStatus> listFiles(
+  final Path f, final boolean recursive) throws IOException {
+Path qualifiedPath = f.makeQualified(uri, workingDir);
+final FileStatus status = getFileStatus(qualifiedPath);
+PathFilter filter = new PathFilter() {
+  @Override
+  public boolean accept(Path path) {
+return status.isFile() || !path.equals(f);
+  }
+};
+FileStatusAcceptor acceptor =
+new FileStatusAcceptor.AcceptFilesOnly(qualifiedPath);
+return innerList(f, status, filter, acceptor, recursive);
+  }
+
+  @Override
+  public RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f)
+  throws IOException {
+return listLocatedStatus(f, DEFAULT_FILTER);
+  }
+
+  @Override
+  public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f,
+  final PathFilter filter) throws IOE

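The methods added above implement the standard FileSystem iterator contract, so callers can walk large OSS trees without buffering the whole listing in one array. A minimal client-side sketch (the bucket name and path are invented for illustration, not taken from the patch):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListOssFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("oss://example-bucket/"), conf);
    // Recursive listing; per the AcceptFilesOnly acceptor above, only files come back.
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path("/data"), true);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      System.out.println(status.getPath() + " len=" + status.getLen());
    }
  }
}
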
hadoop git commit: HADOOP-15024. AliyunOSS: Support user agent configuration and include that & Hadoop version information to oss server.

2017-12-08 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 fb809e05d -> abaabb5de


HADOOP-15024. AliyunOSS: Support user agent configuration and include that & 
Hadoop version information to oss server.

(cherry picked from commit c326fc89b06a8fe0978306378ba217748c7f2054)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/abaabb5d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/abaabb5d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/abaabb5d

Branch: refs/heads/branch-3.0
Commit: abaabb5deccc770bb38c933174a4e91c081a45c7
Parents: fb809e0
Author: Sammi Chen 
Authored: Fri Dec 8 21:28:19 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 8 21:28:19 2017 +0800

--
 .../apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java | 4 
 .../main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java  | 7 +++
 .../src/site/markdown/tools/hadoop-aliyun/index.md| 2 +-
 3 files changed, 12 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/abaabb5d/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index 2e8edc7..a7f13c0 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -53,6 +53,7 @@ import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.util.VersionInfo;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -101,6 +102,9 @@ public class AliyunOSSFileSystemStore {
 ESTABLISH_TIMEOUT_DEFAULT));
 clientConf.setSocketTimeout(conf.getInt(SOCKET_TIMEOUT_KEY,
 SOCKET_TIMEOUT_DEFAULT));
+clientConf.setUserAgent(
+conf.get(USER_AGENT_PREFIX, USER_AGENT_PREFIX_DEFAULT) + ", Hadoop/"
++ VersionInfo.getVersion());
 
 String proxyHost = conf.getTrimmed(PROXY_HOST_KEY, "");
 int proxyPort = conf.getInt(PROXY_PORT_KEY, -1);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/abaabb5d/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
index 04a2ccd..baa171f 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.fs.aliyun.oss;
 
+import com.aliyun.oss.common.utils.VersionInfoUtils;
+
 /**
  * ALL configuration constants for OSS filesystem.
  */
@@ -26,6 +28,11 @@ public final class Constants {
   private Constants() {
   }
 
+  // User agent
+  public static final String USER_AGENT_PREFIX = "fs.oss.user.agent.prefix";
+  public static final String USER_AGENT_PREFIX_DEFAULT =
+  VersionInfoUtils.getDefaultUserAgent();
+
   // Class of credential provider
   public static final String ALIYUN_OSS_CREDENTIALS_PROVIDER_KEY =
   "fs.oss.credentials.provider";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/abaabb5d/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
index 2913279..9f24ce6 100644
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -274,7 +274,7 @@ XInclude inclusion. Here is an example of 
`contract-test-options.xml`:
 
   <property>
     <name>fs.oss.impl</name>
-    <value>org.apache.hadoop.fs.aliyun.AliyunOSSFileSystem</value>
+    <value>org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem</value>
   </property>
 
   <property>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
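
As a rough sketch of how the new fs.oss.user.agent.prefix key is meant to be used (the prefix string and bucket URI below are made up for the example; when the key is unset, the OSS SDK default user agent serves as the prefix):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class OssUserAgentExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical prefix value; USER_AGENT_PREFIX_DEFAULT applies when unset.
    conf.set("fs.oss.user.agent.prefix", "my-etl-job/1.0");
    // Per the setUserAgent(...) call above, requests should then carry a user agent
    // of the form "<prefix>, Hadoop/<version>", e.g. "my-etl-job/1.0, Hadoop/3.0.0".
    FileSystem fs = FileSystem.get(URI.create("oss://example-bucket/"), conf);
    System.out.println(fs.getUri());
  }
}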



hadoop git commit: HADOOP-15024. AliyunOSS: Support user agent configuration and include that & Hadoop version information to oss server.

2017-12-08 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0.0 1e0f7a163 -> 6e24fe299


HADOOP-15024. AliyunOSS: Support user agent configuration and include that & 
Hadoop version information to oss server.

(cherry picked from commit abaabb5deccc770bb38c933174a4e91c081a45c7)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6e24fe29
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6e24fe29
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6e24fe29

Branch: refs/heads/branch-3.0.0
Commit: 6e24fe2998734606868f2c39bc49a808f2856bc0
Parents: 1e0f7a1
Author: Sammi Chen 
Authored: Fri Dec 8 21:43:05 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 8 21:43:05 2017 +0800

--
 .../apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java | 4 
 .../main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java  | 7 +++
 .../src/site/markdown/tools/hadoop-aliyun/index.md| 2 +-
 3 files changed, 12 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e24fe29/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index 2e8edc7..a7f13c0 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -53,6 +53,7 @@ import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.util.VersionInfo;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -101,6 +102,9 @@ public class AliyunOSSFileSystemStore {
 ESTABLISH_TIMEOUT_DEFAULT));
 clientConf.setSocketTimeout(conf.getInt(SOCKET_TIMEOUT_KEY,
 SOCKET_TIMEOUT_DEFAULT));
+clientConf.setUserAgent(
+conf.get(USER_AGENT_PREFIX, USER_AGENT_PREFIX_DEFAULT) + ", Hadoop/"
++ VersionInfo.getVersion());
 
 String proxyHost = conf.getTrimmed(PROXY_HOST_KEY, "");
 int proxyPort = conf.getInt(PROXY_PORT_KEY, -1);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e24fe29/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
index 04a2ccd..baa171f 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.fs.aliyun.oss;
 
+import com.aliyun.oss.common.utils.VersionInfoUtils;
+
 /**
  * ALL configuration constants for OSS filesystem.
  */
@@ -26,6 +28,11 @@ public final class Constants {
   private Constants() {
   }
 
+  // User agent
+  public static final String USER_AGENT_PREFIX = "fs.oss.user.agent.prefix";
+  public static final String USER_AGENT_PREFIX_DEFAULT =
+  VersionInfoUtils.getDefaultUserAgent();
+
   // Class of credential provider
   public static final String ALIYUN_OSS_CREDENTIALS_PROVIDER_KEY =
   "fs.oss.credentials.provider";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e24fe29/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
index 2913279..9f24ce6 100644
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -274,7 +274,7 @@ XInclude inclusion. Here is an example of 
`contract-test-options.xml`:
 
   <property>
     <name>fs.oss.impl</name>
-    <value>org.apache.hadoop.fs.aliyun.AliyunOSSFileSystem</value>
+    <value>org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem</value>
   </property>
 
   <property>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15024. AliyunOSS: Support user agent configuration and include that & Hadoop version information to oss server.

2017-12-08 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 96adff737 -> 94390fcd5


HADOOP-15024. AliyunOSS: Support user agent configuration and include that & 
Hadoop version information to oss server.

(cherry picked from commit 6e24fe2998734606868f2c39bc49a808f2856bc0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/94390fcd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/94390fcd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/94390fcd

Branch: refs/heads/branch-2
Commit: 94390fcd52d44aa678bfda835225f6b2ccff61b5
Parents: 96adff7
Author: Sammi Chen 
Authored: Fri Dec 8 22:02:00 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 8 22:02:00 2017 +0800

--
 .../apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java | 5 +++--
 .../main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java  | 7 +++
 2 files changed, 10 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/94390fcd/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index 85646e8..486183f 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -94,8 +94,9 @@ public class AliyunOSSFileSystemStore {
 ESTABLISH_TIMEOUT_DEFAULT));
 clientConf.setSocketTimeout(conf.getInt(SOCKET_TIMEOUT_KEY,
 SOCKET_TIMEOUT_DEFAULT));
-clientConf.setUserAgent(VersionInfo.getVersion());
-LOG.warn("Hadoop version is " + VersionInfo.getVersion());
+clientConf.setUserAgent(
+conf.get(USER_AGENT_PREFIX, USER_AGENT_PREFIX_DEFAULT) + ", Hadoop/"
++ VersionInfo.getVersion());
 
 String proxyHost = conf.getTrimmed(PROXY_HOST_KEY, "");
 int proxyPort = conf.getInt(PROXY_PORT_KEY, -1);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/94390fcd/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
index 04a2ccd..baa171f 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.fs.aliyun.oss;
 
+import com.aliyun.oss.common.utils.VersionInfoUtils;
+
 /**
  * ALL configuration constants for OSS filesystem.
  */
@@ -26,6 +28,11 @@ public final class Constants {
   private Constants() {
   }
 
+  // User agent
+  public static final String USER_AGENT_PREFIX = "fs.oss.user.agent.prefix";
+  public static final String USER_AGENT_PREFIX_DEFAULT =
+  VersionInfoUtils.getDefaultUserAgent();
+
   // Class of credential provider
   public static final String ALIYUN_OSS_CREDENTIALS_PROVIDER_KEY =
   "fs.oss.credentials.provider";


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15024. AliyunOSS: Support user agent configuration and include that & Hadoop version information to oss server.

2017-12-08 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 0f95a1d26 -> 4a064dd64


HADOOP-15024. AliyunOSS: Support user agent configuration and include that & 
Hadoop version information to oss server.

(cherry picked from commit 94390fcd52d44aa678bfda835225f6b2ccff61b5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4a064dd6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4a064dd6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4a064dd6

Branch: refs/heads/branch-2.9
Commit: 4a064dd644482a118122fe414ca9cd9f07e3b87f
Parents: 0f95a1d
Author: Sammi Chen 
Authored: Fri Dec 8 22:37:17 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 8 22:37:17 2017 +0800

--
 .../apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java | 4 
 .../main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java  | 7 +++
 2 files changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a064dd6/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index aba3db8..cda9d56 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -47,6 +47,7 @@ import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.util.VersionInfo;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -93,6 +94,9 @@ public class AliyunOSSFileSystemStore {
 ESTABLISH_TIMEOUT_DEFAULT));
 clientConf.setSocketTimeout(conf.getInt(SOCKET_TIMEOUT_KEY,
 SOCKET_TIMEOUT_DEFAULT));
+clientConf.setUserAgent(
+conf.get(USER_AGENT_PREFIX, USER_AGENT_PREFIX_DEFAULT) + ", Hadoop/"
++ VersionInfo.getVersion());
 
 String proxyHost = conf.getTrimmed(PROXY_HOST_KEY, "");
 int proxyPort = conf.getInt(PROXY_PORT_KEY, -1);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a064dd6/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
index 04a2ccd..baa171f 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.fs.aliyun.oss;
 
+import com.aliyun.oss.common.utils.VersionInfoUtils;
+
 /**
  * ALL configuration constants for OSS filesystem.
  */
@@ -26,6 +28,11 @@ public final class Constants {
   private Constants() {
   }
 
+  // User agent
+  public static final String USER_AGENT_PREFIX = "fs.oss.user.agent.prefix";
+  public static final String USER_AGENT_PREFIX_DEFAULT =
+  VersionInfoUtils.getDefaultUserAgent();
+
   // Class of credential provider
   public static final String ALIYUN_OSS_CREDENTIALS_PROVIDER_KEY =
   "fs.oss.credentials.provider";


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15104. Aliyun OSS: change the default value of max error retry. Contributed by Jinhu Wu.

2017-12-14 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 a85036aed -> 27807e4cc


HADOOP-15104. Aliyun OSS: change the default value of max error retry. 
Contributed by Jinhu Wu.

(cherry picked from commit ce04340ec73617daff74378056a95c5d0cc0a790)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/27807e4c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/27807e4c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/27807e4c

Branch: refs/heads/branch-2
Commit: 27807e4ccb5601173d09caf82029f9ec37067b47
Parents: a85036a
Author: Sammi Chen 
Authored: Fri Dec 15 13:53:48 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 15 13:53:48 2017 +0800

--
 .../src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/27807e4c/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
index baa171f..dd71842 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
@@ -66,7 +66,7 @@ public final class Constants {
 
   // Number of times we should retry errors
   public static final String MAX_ERROR_RETRIES_KEY = "fs.oss.attempts.maximum";
-  public static final int MAX_ERROR_RETRIES_DEFAULT = 20;
+  public static final int MAX_ERROR_RETRIES_DEFAULT = 10;
 
   // Time until we give up trying to establish a connection to oss
   public static final String ESTABLISH_TIMEOUT_KEY =


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
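
Jobs that relied on the older, more aggressive retry behaviour can restore it through configuration; a small sketch (the value 20 simply mirrors the previous default and is not a recommendation):

import org.apache.hadoop.conf.Configuration;

public class OssRetryTuning {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // fs.oss.attempts.maximum now defaults to 10 (MAX_ERROR_RETRIES_DEFAULT above);
    // raise it if transient OSS errors are common on your network.
    conf.setInt("fs.oss.attempts.maximum", 20);
    System.out.println("retries = " + conf.getInt("fs.oss.attempts.maximum", 10));
    // Pass this Configuration to FileSystem.get(...) when opening oss:// paths.
  }
}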



hadoop git commit: HADOOP-15104. Aliyun OSS: change the default value of max error retry. Contributed by Jinhu Wu.

2017-12-14 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 108f8b8fa -> b2b9e0da6


HADOOP-15104. Aliyun OSS: change the default value of max error retry. 
Contributed by Jinhu Wu.

(cherry picked from commit ce04340ec73617daff74378056a95c5d0cc0a790)
(cherry picked from commit 27807e4ccb5601173d09caf82029f9ec37067b47)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b2b9e0da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b2b9e0da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b2b9e0da

Branch: refs/heads/branch-2.9
Commit: b2b9e0da6c9ffc2e974491975f9c4ff8349c3f9f
Parents: 108f8b8
Author: Sammi Chen 
Authored: Fri Dec 15 13:53:48 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 15 13:56:44 2017 +0800

--
 .../src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2b9e0da/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
index baa171f..dd71842 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
@@ -66,7 +66,7 @@ public final class Constants {
 
   // Number of times we should retry errors
   public static final String MAX_ERROR_RETRIES_KEY = "fs.oss.attempts.maximum";
-  public static final int MAX_ERROR_RETRIES_DEFAULT = 20;
+  public static final int MAX_ERROR_RETRIES_DEFAULT = 10;
 
   // Time until we give up trying to establish a connection to oss
   public static final String ESTABLISH_TIMEOUT_KEY =


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15111. Aliyun OSS: backport HADOOP-14993 to branch-2. Contributed by Genmao Yu.

2017-12-14 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 24c4d2a7a -> 1fee447bf


HADOOP-15111. Aliyun OSS: backport HADOOP-14993 to branch-2. Contributed by 
Genmao Yu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1fee447b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1fee447b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1fee447b

Branch: refs/heads/branch-2
Commit: 1fee447bf2d0cda914c3972d226552ae8f4926c0
Parents: 24c4d2a
Author: Sammi Chen 
Authored: Fri Dec 15 14:13:23 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 15 14:28:39 2017 +0800

--
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  75 +--
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 107 
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../fs/aliyun/oss/FileStatusAcceptor.java   | 125 +++
 .../site/markdown/tools/hadoop-aliyun/index.md  |   6 +-
 5 files changed, 310 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fee447b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 3561b02..21fdabf 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -28,14 +28,18 @@ import java.util.List;
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.util.Progressable;
 
@@ -46,6 +50,7 @@ import com.aliyun.oss.model.ObjectMetadata;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static 
org.apache.hadoop.fs.aliyun.oss.AliyunOSSUtils.objectRepresentsDirectory;
 import static org.apache.hadoop.fs.aliyun.oss.Constants.*;
 
 /**
@@ -60,6 +65,12 @@ public class AliyunOSSFileSystem extends FileSystem {
   private Path workingDir;
   private AliyunOSSFileSystemStore store;
   private int maxKeys;
+  private static final PathFilter DEFAULT_FILTER = new PathFilter() {
+@Override
+public boolean accept(Path file) {
+  return true;
+}
+  };
 
   @Override
   public FSDataOutputStream append(Path path, int bufferSize,
@@ -302,18 +313,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
 
   /**
-   * Check if OSS object represents a directory.
-   *
-   * @param name object key
-   * @param size object content length
-   * @return true if object represents a directory
-   */
-  private boolean objectRepresentsDirectory(final String name,
-  final long size) {
-return StringUtils.isNotEmpty(name) && name.endsWith("/") && size == 0L;
-  }
-
-  /**
* Turn a path (relative or otherwise) into an OSS key.
*
* @param path the path of the file.
@@ -332,6 +331,58 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
 
   @Override
+  public RemoteIterator<LocatedFileStatus> listFiles(
+  final Path f, final boolean recursive) throws IOException {
+Path qualifiedPath = f.makeQualified(uri, workingDir);
+final FileStatus status = getFileStatus(qualifiedPath);
+PathFilter filter = new PathFilter() {
+  @Override
+  public boolean accept(Path path) {
+return status.isFile() || !path.equals(f);
+  }
+};
+FileStatusAcceptor acceptor =
+new FileStatusAcceptor.AcceptFilesOnly(qualifiedPath);
+return innerList(f, status, filter, acceptor, recursive);
+  }
+
+  @Override
+  public RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f)
+throws IOException {
+return listLocatedStatus(f, DEFAULT_FILTER);
+  }
+
+  @Override
+  public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f,
+  final PathFilter filter) throws IOException {
+Path qualifiedPath = f.makeQualified(uri, workingDir);
+final FileStatus status = getFileStatus(qualifiedPath);
+Fil

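A short, hypothetical sketch of consuming the located-status listing this backport adds; the ".parquet" suffix check is invented for the example and is applied on the client side:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListLocatedOssStatus {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("oss://example-bucket/"), conf);
    // Non-recursive listing of the directory's direct children.
    RemoteIterator<LocatedFileStatus> it = fs.listLocatedStatus(new Path("/warehouse"));
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      if (status.isFile() && status.getPath().getName().endsWith(".parquet")) {
        System.out.println(status.getPath() + " len=" + status.getLen());
      }
    }
  }
}
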
hadoop git commit: HADOOP-15111. Aliyun OSS: backport HADOOP-14993 to branch-2. Contributed by Genmao Yu.

2017-12-14 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 c629f3376 -> 6af3ea58c


HADOOP-15111. Aliyun OSS: backport HADOOP-14993 to branch-2. Contributed by 
Genmao Yu.

(cherry picked from commit 1fee447bf2d0cda914c3972d226552ae8f4926c0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6af3ea58
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6af3ea58
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6af3ea58

Branch: refs/heads/branch-2.9
Commit: 6af3ea58c980e24942cf00de8fb8ec6c9e73135d
Parents: c629f33
Author: Sammi Chen 
Authored: Fri Dec 15 14:13:23 2017 +0800
Committer: Sammi Chen 
Committed: Fri Dec 15 14:45:32 2017 +0800

--
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  75 +--
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 107 
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|  12 ++
 .../fs/aliyun/oss/FileStatusAcceptor.java   | 125 +++
 .../site/markdown/tools/hadoop-aliyun/index.md  |   6 +-
 5 files changed, 310 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6af3ea58/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 3561b02..21fdabf 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -28,14 +28,18 @@ import java.util.List;
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.fs.PathIOException;
+import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.util.Progressable;
 
@@ -46,6 +50,7 @@ import com.aliyun.oss.model.ObjectMetadata;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static 
org.apache.hadoop.fs.aliyun.oss.AliyunOSSUtils.objectRepresentsDirectory;
 import static org.apache.hadoop.fs.aliyun.oss.Constants.*;
 
 /**
@@ -60,6 +65,12 @@ public class AliyunOSSFileSystem extends FileSystem {
   private Path workingDir;
   private AliyunOSSFileSystemStore store;
   private int maxKeys;
+  private static final PathFilter DEFAULT_FILTER = new PathFilter() {
+@Override
+public boolean accept(Path file) {
+  return true;
+}
+  };
 
   @Override
   public FSDataOutputStream append(Path path, int bufferSize,
@@ -302,18 +313,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
 
   /**
-   * Check if OSS object represents a directory.
-   *
-   * @param name object key
-   * @param size object content length
-   * @return true if object represents a directory
-   */
-  private boolean objectRepresentsDirectory(final String name,
-  final long size) {
-return StringUtils.isNotEmpty(name) && name.endsWith("/") && size == 0L;
-  }
-
-  /**
* Turn a path (relative or otherwise) into an OSS key.
*
* @param path the path of the file.
@@ -332,6 +331,58 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
 
   @Override
+  public RemoteIterator<LocatedFileStatus> listFiles(
+  final Path f, final boolean recursive) throws IOException {
+Path qualifiedPath = f.makeQualified(uri, workingDir);
+final FileStatus status = getFileStatus(qualifiedPath);
+PathFilter filter = new PathFilter() {
+  @Override
+  public boolean accept(Path path) {
+return status.isFile() || !path.equals(f);
+  }
+};
+FileStatusAcceptor acceptor =
+new FileStatusAcceptor.AcceptFilesOnly(qualifiedPath);
+return innerList(f, status, filter, acceptor, recursive);
+  }
+
+  @Override
+  public RemoteIterator<LocatedFileStatus> listLocatedStatus(Path f)
+throws IOException {
+return listLocatedStatus(f, DEFAULT_FILTER);
+  }
+
+  @Override
+  public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f,
+  final PathFilter filter) throws IOException {
+Path qualifiedPath = f.makeQualified(uri, workingD

hadoop git commit: HADOOP-15868. AliyunOSS: update document for properties of multiple part download, multiple part upload and directory copy. Contributed by Jinhu Wu.

2018-10-26 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk 38a65e3b7 -> 7574d1853


HADOOP-15868. AliyunOSS: update document for properties of multiple part 
download, multiple part upload and directory copy. Contributed by Jinhu Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7574d185
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7574d185
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7574d185

Branch: refs/heads/trunk
Commit: 7574d18538e838f40581519080d7c8621c65e53b
Parents: 38a65e3
Author: Sammi Chen 
Authored: Fri Oct 26 15:19:56 2018 +0800
Committer: Sammi Chen 
Committed: Fri Oct 26 15:19:56 2018 +0800

--
 .../site/markdown/tools/hadoop-aliyun/index.md  | 36 
 1 file changed, 36 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7574d185/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
index 0703790..0c3131d 100644
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -229,6 +229,42 @@ please raise your issues with them.
 
 
 
+  <name>fs.oss.upload.active.blocks</name>
+  <value>4</value>
+  <description>Active(Concurrent) upload blocks when uploading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.threads</name>
+  <value>10</value>
+  <description>The maximum number of threads allowed in the pool for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.ahead.part.max.number</name>
+  <value>4</value>
+  <description>The maximum number of read ahead parts when reading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.total.tasks</name>
+  <value>128</value>
+  <description>The maximum queue number for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.threads</name>
+  <value>25</value>
+  <description>The maximum number of threads allowed in the pool for copy operations.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.tasks.per.dir</name>
+  <value>5</value>
+  <description>The maximum number of concurrent tasks allowed when copying a directory.</description>
+</property>
+
+<property>
   <name>fs.oss.multipart.upload.threshold</name>
   <value>20971520</value>
   <description>Minimum size in bytes before we start a multipart uploads or copy.</description>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
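
The same keys documented above can also be set programmatically; a sketch that simply restates the documented defaults as a starting point (values are illustrative, tune them for your workload):

import org.apache.hadoop.conf.Configuration;

public class OssTransferTuning {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Values below are the documented defaults, shown here only as a template.
    conf.setInt("fs.oss.upload.active.blocks", 4);
    conf.setInt("fs.oss.multipart.download.threads", 10);
    conf.setInt("fs.oss.multipart.download.ahead.part.max.number", 4);
    conf.setInt("fs.oss.max.total.tasks", 128);
    conf.setInt("fs.oss.max.copy.threads", 25);
    conf.setInt("fs.oss.max.copy.tasks.per.dir", 5);
    // Pass this Configuration to FileSystem.get(...) when opening oss:// paths.
  }
}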



hadoop git commit: HADOOP-15868. AliyunOSS: update document for properties of multiple part download, multiple part upload and directory copy. Contributed by Jinhu Wu.

2018-10-26 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 8b4f9b3e2 -> 366541d83


HADOOP-15868. AliyunOSS: update document for properties of multiple part 
download, multiple part upload and directory copy. Contributed by Jinhu Wu.

(cherry picked from commit 7574d18538e838f40581519080d7c8621c65e53b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/366541d8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/366541d8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/366541d8

Branch: refs/heads/branch-3.1
Commit: 366541d834f70fd6f8d4c5296a9e844236c6fd74
Parents: 8b4f9b3
Author: Sammi Chen 
Authored: Fri Oct 26 15:19:56 2018 +0800
Committer: Sammi Chen 
Committed: Fri Oct 26 15:28:20 2018 +0800

--
 .../site/markdown/tools/hadoop-aliyun/index.md  | 36 
 1 file changed, 36 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/366541d8/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
index 0703790..0c3131d 100644
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -229,6 +229,42 @@ please raise your issues with them.
 
 
 
+  <name>fs.oss.upload.active.blocks</name>
+  <value>4</value>
+  <description>Active(Concurrent) upload blocks when uploading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.threads</name>
+  <value>10</value>
+  <description>The maximum number of threads allowed in the pool for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.ahead.part.max.number</name>
+  <value>4</value>
+  <description>The maximum number of read ahead parts when reading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.total.tasks</name>
+  <value>128</value>
+  <description>The maximum queue number for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.threads</name>
+  <value>25</value>
+  <description>The maximum number of threads allowed in the pool for copy operations.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.tasks.per.dir</name>
+  <value>5</value>
+  <description>The maximum number of concurrent tasks allowed when copying a directory.</description>
+</property>
+
+<property>
   <name>fs.oss.multipart.upload.threshold</name>
   <value>20971520</value>
   <description>Minimum size in bytes before we start a multipart uploads or copy.</description>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15868. AliyunOSS: update document for properties of multiple part download, multiple part upload and directory copy. Contributed by Jinhu Wu.

2018-10-26 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 f6469adbb -> c5a227062


HADOOP-15868. AliyunOSS: update document for properties of multiple part 
download, multiple part upload and directory copy. Contributed by Jinhu Wu.

(cherry picked from commit 7574d18538e838f40581519080d7c8621c65e53b)
(cherry picked from commit 366541d834f70fd6f8d4c5296a9e844236c6fd74)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c5a22706
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c5a22706
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c5a22706

Branch: refs/heads/branch-3.0
Commit: c5a227062fae33ce3137161802d6f090a6af010e
Parents: f6469ad
Author: Sammi Chen 
Authored: Fri Oct 26 15:19:56 2018 +0800
Committer: Sammi Chen 
Committed: Fri Oct 26 15:30:06 2018 +0800

--
 .../site/markdown/tools/hadoop-aliyun/index.md  | 36 
 1 file changed, 36 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5a22706/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
index 0703790..0c3131d 100644
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -229,6 +229,42 @@ please raise your issues with them.
 
 
 
+  <name>fs.oss.upload.active.blocks</name>
+  <value>4</value>
+  <description>Active(Concurrent) upload blocks when uploading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.threads</name>
+  <value>10</value>
+  <description>The maximum number of threads allowed in the pool for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.ahead.part.max.number</name>
+  <value>4</value>
+  <description>The maximum number of read ahead parts when reading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.total.tasks</name>
+  <value>128</value>
+  <description>The maximum queue number for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.threads</name>
+  <value>25</value>
+  <description>The maximum number of threads allowed in the pool for copy operations.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.tasks.per.dir</name>
+  <value>5</value>
+  <description>The maximum number of concurrent tasks allowed when copying a directory.</description>
+</property>
+
+<property>
   <name>fs.oss.multipart.upload.threshold</name>
   <value>20971520</value>
   <description>Minimum size in bytes before we start a multipart uploads or copy.</description>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15868. AliyunOSS: update document for properties of multiple part download, multiple part upload and directory copy. Contributed by Jinhu Wu.

2018-10-26 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 642b613a7 -> ca22bf175


HADOOP-15868. AliyunOSS: update document for properties of multiple part 
download, multiple part upload and directory copy. Contributed by Jinhu Wu.

(cherry picked from commit 7574d18538e838f40581519080d7c8621c65e53b)
(cherry picked from commit 366541d834f70fd6f8d4c5296a9e844236c6fd74)
(cherry picked from commit c5a227062fae33ce3137161802d6f090a6af010e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ca22bf17
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ca22bf17
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ca22bf17

Branch: refs/heads/branch-3.2
Commit: ca22bf175f91980f78dedf1cd27f114acd56fa50
Parents: 642b613
Author: Sammi Chen 
Authored: Fri Oct 26 15:19:56 2018 +0800
Committer: Sammi Chen 
Committed: Fri Oct 26 15:33:05 2018 +0800

--
 .../site/markdown/tools/hadoop-aliyun/index.md  | 36 
 1 file changed, 36 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca22bf17/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
index 0703790..0c3131d 100644
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -229,6 +229,42 @@ please raise your issues with them.
 
 
 
+  <name>fs.oss.upload.active.blocks</name>
+  <value>4</value>
+  <description>Active(Concurrent) upload blocks when uploading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.threads</name>
+  <value>10</value>
+  <description>The maximum number of threads allowed in the pool for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.ahead.part.max.number</name>
+  <value>4</value>
+  <description>The maximum number of read ahead parts when reading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.total.tasks</name>
+  <value>128</value>
+  <description>The maximum queue number for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.threads</name>
+  <value>25</value>
+  <description>The maximum number of threads allowed in the pool for copy operations.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.tasks.per.dir</name>
+  <value>5</value>
+  <description>The maximum number of concurrent tasks allowed when copying a directory.</description>
+</property>
+
+<property>
   <name>fs.oss.multipart.upload.threshold</name>
   <value>20971520</value>
   <description>Minimum size in bytes before we start a multipart uploads or copy.</description>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15868. AliyunOSS: update document for properties of multiple part download, multiple part upload and directory copy. Contributed by Jinhu Wu.

2018-10-26 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 f2739f3f5 -> a0aa61014


HADOOP-15868. AliyunOSS: update document for properties of multiple part 
download, multiple part upload and directory copy. Contributed by Jinhu Wu.

(cherry picked from commit 7574d18538e838f40581519080d7c8621c65e53b)
(cherry picked from commit 366541d834f70fd6f8d4c5296a9e844236c6fd74)
(cherry picked from commit c5a227062fae33ce3137161802d6f090a6af010e)
(cherry picked from commit ca22bf175f91980f78dedf1cd27f114acd56fa50)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a0aa6101
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a0aa6101
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a0aa6101

Branch: refs/heads/branch-2
Commit: a0aa610143fd32050e5e862b130d582d132d8fbb
Parents: f2739f3
Author: Sammi Chen 
Authored: Fri Oct 26 15:19:56 2018 +0800
Committer: Sammi Chen 
Committed: Fri Oct 26 15:35:30 2018 +0800

--
 .../site/markdown/tools/hadoop-aliyun/index.md  | 36 
 1 file changed, 36 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0aa6101/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
index 0703790..0c3131d 100644
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -229,6 +229,42 @@ please raise your issues with them.
 
 
 
+  <name>fs.oss.upload.active.blocks</name>
+  <value>4</value>
+  <description>Active(Concurrent) upload blocks when uploading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.threads</name>
+  <value>10</value>
+  <description>The maximum number of threads allowed in the pool for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.ahead.part.max.number</name>
+  <value>4</value>
+  <description>The maximum number of read ahead parts when reading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.total.tasks</name>
+  <value>128</value>
+  <description>The maximum queue number for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.threads</name>
+  <value>25</value>
+  <description>The maximum number of threads allowed in the pool for copy operations.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.tasks.per.dir</name>
+  <value>5</value>
+  <description>The maximum number of concurrent tasks allowed when copying a directory.</description>
+</property>
+
+<property>
   <name>fs.oss.multipart.upload.threshold</name>
   <value>20971520</value>
   <description>Minimum size in bytes before we start a multipart uploads or copy.</description>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15868. AliyunOSS: update document for properties of multiple part download, multiple part upload and directory copy. Contributed by Jinhu Wu.

2018-10-26 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 095ad6f7f -> 94d797753


HADOOP-15868. AliyunOSS: update document for properties of multiple part 
download, multiple part upload and directory copy. Contributed by Jinhu Wu.

(cherry picked from commit 7574d18538e838f40581519080d7c8621c65e53b)
(cherry picked from commit 366541d834f70fd6f8d4c5296a9e844236c6fd74)
(cherry picked from commit c5a227062fae33ce3137161802d6f090a6af010e)
(cherry picked from commit ca22bf175f91980f78dedf1cd27f114acd56fa50)
(cherry picked from commit a0aa610143fd32050e5e862b130d582d132d8fbb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/94d79775
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/94d79775
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/94d79775

Branch: refs/heads/branch-2.9
Commit: 94d797753789062acfa13a468fc9d6527b94df78
Parents: 095ad6f
Author: Sammi Chen 
Authored: Fri Oct 26 15:19:56 2018 +0800
Committer: Sammi Chen 
Committed: Fri Oct 26 15:37:08 2018 +0800

--
 .../site/markdown/tools/hadoop-aliyun/index.md  | 36 
 1 file changed, 36 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/94d79775/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md 
b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
index 0703790..0c3131d 100644
--- a/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
+++ b/hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md
@@ -229,6 +229,42 @@ please raise your issues with them.
 
 
 
+  <name>fs.oss.upload.active.blocks</name>
+  <value>4</value>
+  <description>Active(Concurrent) upload blocks when uploading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.threads</name>
+  <value>10</value>
+  <description>The maximum number of threads allowed in the pool for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.multipart.download.ahead.part.max.number</name>
+  <value>4</value>
+  <description>The maximum number of read ahead parts when reading a file.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.total.tasks</name>
+  <value>128</value>
+  <description>The maximum queue number for multipart download and upload.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.threads</name>
+  <value>25</value>
+  <description>The maximum number of threads allowed in the pool for copy operations.</description>
+</property>
+
+<property>
+  <name>fs.oss.max.copy.tasks.per.dir</name>
+  <value>5</value>
+  <description>The maximum number of concurrent tasks allowed when copying a directory.</description>
+</property>
+
+<property>
   <name>fs.oss.multipart.upload.threshold</name>
   <value>20971520</value>
   <description>Minimum size in bytes before we start a multipart uploads or copy.</description>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. Contributed by Jinhu Wu.

2018-11-13 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk a13be203b -> 3fade865c


HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. 
Contributed by Jinhu Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3fade865
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3fade865
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3fade865

Branch: refs/heads/trunk
Commit: 3fade865ce84dcf68bcd7de5a5ed1c7d904796e9
Parents: a13be20
Author: Sammi Chen 
Authored: Wed Nov 14 12:58:57 2018 +0800
Committer: Sammi Chen 
Committed: Wed Nov 14 12:58:57 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  4 --
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 22 --
 .../site/markdown/tools/hadoop-aliyun/index.md  |  5 ++
 .../oss/TestAliyunOSSBlockOutputStream.java | 70 +---
 4 files changed, 83 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3fade865/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 4fbb6fb..9c4435c 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -405,7 +405,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 
   ObjectListing objects = store.listObjects(key, maxKeys, null, false);
   while (true) {
-statistics.incrementReadOps(1);
 for (OSSObjectSummary objectSummary : objects.getObjectSummaries()) {
   String objKey = objectSummary.getKey();
   if (objKey.equals(key + "/")) {
@@ -446,7 +445,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
   String nextMarker = objects.getNextMarker();
   objects = store.listObjects(key, maxKeys, nextMarker, false);
-  statistics.incrementReadOps(1);
 } else {
   break;
 }
@@ -694,7 +692,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 new SemaphoredDelegatingExecutor(boundedCopyThreadPool,
 maxConcurrentCopyTasksPerDir, true));
 ObjectListing objects = store.listObjects(srcKey, maxKeys, null, true);
-statistics.incrementReadOps(1);
 // Copy files from src folder to dst
 int copiesToFinish = 0;
 while (true) {
@@ -717,7 +714,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   if (objects.isTruncated()) {
 String nextMarker = objects.getNextMarker();
 objects = store.listObjects(srcKey, maxKeys, nextMarker, true);
-statistics.incrementReadOps(1);
   } else {
 break;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3fade865/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index 7639eb3..4fc1325 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -175,6 +175,7 @@ public class AliyunOSSFileSystemStore {
   CannedAccessControlList cannedACL =
   CannedAccessControlList.valueOf(cannedACLName);
   ossClient.setBucketAcl(bucketName, cannedACL);
+  statistics.incrementWriteOps(1);
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
@@ -216,6 +217,7 @@ public class AliyunOSSFileSystemStore {
   // Here, we choose the simple mode to do batch delete.
   deleteRequest.setQuiet(true);
   DeleteObjectsResult result = ossClient.deleteObjects(deleteRequest);
+  statistics.incrementWriteOps(1);
   deleteFailed = result.getDeletedObjects();
   tries++;
   if (tries == retry) {
@@ -268,11 +270,13 @@ public class AliyunOSSFileSystemStore {
*/
   public ObjectMetadata getObjectMetadata(String key) {
 try {
-  return ossClient.getObjectMetadata(bucketName, key);
+  ObjectMetadata objectMeta = ossClient.getObjectMetadata(bucketName, key);
+  statistics.incrementReadOps(1);
+  return objectMeta;
 } catch (OSSException osse) {

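A small sketch of how the per-scheme counters touched by this patch can be observed from a client; the bucket URI and path are invented, and FileSystem.getStatistics(scheme, class) is the long-standing (if deprecated) lookup used here for brevity:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OssStatisticsProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("oss://example-bucket/"), conf);
    fs.getFileStatus(new Path("/data/sample.txt")); // metadata lookups now count as read ops
    FileSystem.Statistics stats = FileSystem.getStatistics("oss", fs.getClass());
    System.out.println("readOps=" + stats.getReadOps()
        + " writeOps=" + stats.getWriteOps());
  }
}
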
hadoop git commit: HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. Contributed by Jinhu Wu.

2018-11-13 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 04bba9158 -> 64cb97fb4


HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. 
Contributed by Jinhu Wu.

(cherry picked from commit 3fade865ce84dcf68bcd7de5a5ed1c7d904796e9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/64cb97fb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/64cb97fb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/64cb97fb

Branch: refs/heads/branch-3.0
Commit: 64cb97fb4467513f73fde18f96f391ad34e3bb0a
Parents: 04bba91
Author: Sammi Chen 
Authored: Wed Nov 14 12:58:57 2018 +0800
Committer: Sammi Chen 
Committed: Wed Nov 14 13:09:11 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  4 --
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 22 --
 .../site/markdown/tools/hadoop-aliyun/index.md  |  5 ++
 .../oss/TestAliyunOSSBlockOutputStream.java | 70 +---
 4 files changed, 83 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/64cb97fb/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 93e31d5..d7061e5 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -405,7 +405,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 
   ObjectListing objects = store.listObjects(key, maxKeys, null, false);
   while (true) {
-statistics.incrementReadOps(1);
 for (OSSObjectSummary objectSummary : objects.getObjectSummaries()) {
   String objKey = objectSummary.getKey();
   if (objKey.equals(key + "/")) {
@@ -446,7 +445,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
   String nextMarker = objects.getNextMarker();
   objects = store.listObjects(key, maxKeys, nextMarker, false);
-  statistics.incrementReadOps(1);
 } else {
   break;
 }
@@ -694,7 +692,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 new SemaphoredDelegatingExecutor(boundedCopyThreadPool,
 maxConcurrentCopyTasksPerDir, true));
 ObjectListing objects = store.listObjects(srcKey, maxKeys, null, true);
-statistics.incrementReadOps(1);
 // Copy files from src folder to dst
 int copiesToFinish = 0;
 while (true) {
@@ -717,7 +714,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   if (objects.isTruncated()) {
 String nextMarker = objects.getNextMarker();
 objects = store.listObjects(srcKey, maxKeys, nextMarker, true);
-statistics.incrementReadOps(1);
   } else {
 break;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/64cb97fb/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index 0f418d7..646cd25 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -175,6 +175,7 @@ public class AliyunOSSFileSystemStore {
   CannedAccessControlList cannedACL =
   CannedAccessControlList.valueOf(cannedACLName);
   ossClient.setBucketAcl(bucketName, cannedACL);
+  statistics.incrementWriteOps(1);
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
@@ -216,6 +217,7 @@ public class AliyunOSSFileSystemStore {
   // Here, we choose the simple mode to do batch delete.
   deleteRequest.setQuiet(true);
   DeleteObjectsResult result = ossClient.deleteObjects(deleteRequest);
+  statistics.incrementWriteOps(1);
   deleteFailed = result.getDeletedObjects();
   tries++;
   if (tries == retry) {
@@ -268,11 +270,13 @@ public class AliyunOSSFileSystemStore {
*/
   public ObjectMetadata getObjectMetadata(String key) {
 try {
-  return ossClient.getObjectMetadata(bucketName, key);
+  ObjectMetadata objectMeta = ossClient.getObjectMetadata(bucketName, key);
+  statistics.in

hadoop git commit: HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. Contributed by Jinhu Wu.

2018-11-13 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 8ab6aa1b4 -> 5d532cfc6


HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. 
Contributed by Jinhu Wu.

(cherry picked from commit 3fade865ce84dcf68bcd7de5a5ed1c7d904796e9)
(cherry picked from commit 64cb97fb4467513f73fde18f96f391ad34e3bb0a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5d532cfc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5d532cfc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5d532cfc

Branch: refs/heads/branch-3.1
Commit: 5d532cfc6f23f942ed10edab55ed251eb99a0664
Parents: 8ab6aa1
Author: Sammi Chen 
Authored: Wed Nov 14 12:58:57 2018 +0800
Committer: Sammi Chen 
Committed: Wed Nov 14 13:12:22 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  4 --
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 22 --
 .../site/markdown/tools/hadoop-aliyun/index.md  |  5 ++
 .../oss/TestAliyunOSSBlockOutputStream.java | 70 +---
 4 files changed, 83 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5d532cfc/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 93e31d5..d7061e5 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -405,7 +405,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 
   ObjectListing objects = store.listObjects(key, maxKeys, null, false);
   while (true) {
-statistics.incrementReadOps(1);
 for (OSSObjectSummary objectSummary : objects.getObjectSummaries()) {
   String objKey = objectSummary.getKey();
   if (objKey.equals(key + "/")) {
@@ -446,7 +445,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
   String nextMarker = objects.getNextMarker();
   objects = store.listObjects(key, maxKeys, nextMarker, false);
-  statistics.incrementReadOps(1);
 } else {
   break;
 }
@@ -694,7 +692,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 new SemaphoredDelegatingExecutor(boundedCopyThreadPool,
 maxConcurrentCopyTasksPerDir, true));
 ObjectListing objects = store.listObjects(srcKey, maxKeys, null, true);
-statistics.incrementReadOps(1);
 // Copy files from src folder to dst
 int copiesToFinish = 0;
 while (true) {
@@ -717,7 +714,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   if (objects.isTruncated()) {
 String nextMarker = objects.getNextMarker();
 objects = store.listObjects(srcKey, maxKeys, nextMarker, true);
-statistics.incrementReadOps(1);
   } else {
 break;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5d532cfc/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index 0f418d7..646cd25 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -175,6 +175,7 @@ public class AliyunOSSFileSystemStore {
   CannedAccessControlList cannedACL =
   CannedAccessControlList.valueOf(cannedACLName);
   ossClient.setBucketAcl(bucketName, cannedACL);
+  statistics.incrementWriteOps(1);
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
@@ -216,6 +217,7 @@ public class AliyunOSSFileSystemStore {
   // Here, we choose the simple mode to do batch delete.
   deleteRequest.setQuiet(true);
   DeleteObjectsResult result = ossClient.deleteObjects(deleteRequest);
+  statistics.incrementWriteOps(1);
   deleteFailed = result.getDeletedObjects();
   tries++;
   if (tries == retry) {
@@ -268,11 +270,13 @@ public class AliyunOSSFileSystemStore {
*/
   public ObjectMetadata getObjectMetadata(String key) {
 try {
-  return ossClient.getObjectMetadata(bucketName, key);
+  ObjectMetadata objectMeta

hadoop git commit: HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. Contributed by Jinhu Wu.

2018-11-13 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 403984051 -> 37082a664


HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. 
Contributed by Jinhu Wu.

(cherry picked from commit 3fade865ce84dcf68bcd7de5a5ed1c7d904796e9)
(cherry picked from commit 64cb97fb4467513f73fde18f96f391ad34e3bb0a)
(cherry picked from commit 5d532cfc6f23f942ed10edab55ed251eb99a0664)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/37082a66
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/37082a66
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/37082a66

Branch: refs/heads/branch-3.2
Commit: 37082a664aaf99bc40522a8dfa231d71792dd976
Parents: 4039840
Author: Sammi Chen 
Authored: Wed Nov 14 12:58:57 2018 +0800
Committer: Sammi Chen 
Committed: Wed Nov 14 13:48:51 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  4 --
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 22 --
 .../site/markdown/tools/hadoop-aliyun/index.md  |  5 ++
 .../oss/TestAliyunOSSBlockOutputStream.java | 70 +---
 4 files changed, 83 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/37082a66/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 4fbb6fb..9c4435c 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -405,7 +405,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 
   ObjectListing objects = store.listObjects(key, maxKeys, null, false);
   while (true) {
-statistics.incrementReadOps(1);
 for (OSSObjectSummary objectSummary : objects.getObjectSummaries()) {
   String objKey = objectSummary.getKey();
   if (objKey.equals(key + "/")) {
@@ -446,7 +445,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
   String nextMarker = objects.getNextMarker();
   objects = store.listObjects(key, maxKeys, nextMarker, false);
-  statistics.incrementReadOps(1);
 } else {
   break;
 }
@@ -694,7 +692,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 new SemaphoredDelegatingExecutor(boundedCopyThreadPool,
 maxConcurrentCopyTasksPerDir, true));
 ObjectListing objects = store.listObjects(srcKey, maxKeys, null, true);
-statistics.incrementReadOps(1);
 // Copy files from src folder to dst
 int copiesToFinish = 0;
 while (true) {
@@ -717,7 +714,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   if (objects.isTruncated()) {
 String nextMarker = objects.getNextMarker();
 objects = store.listObjects(srcKey, maxKeys, nextMarker, true);
-statistics.incrementReadOps(1);
   } else {
 break;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/37082a66/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index 7639eb3..4fc1325 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -175,6 +175,7 @@ public class AliyunOSSFileSystemStore {
   CannedAccessControlList cannedACL =
   CannedAccessControlList.valueOf(cannedACLName);
   ossClient.setBucketAcl(bucketName, cannedACL);
+  statistics.incrementWriteOps(1);
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
@@ -216,6 +217,7 @@ public class AliyunOSSFileSystemStore {
   // Here, we choose the simple mode to do batch delete.
   deleteRequest.setQuiet(true);
   DeleteObjectsResult result = ossClient.deleteObjects(deleteRequest);
+  statistics.incrementWriteOps(1);
   deleteFailed = result.getDeletedObjects();
   tries++;
   if (tries == retry) {
@@ -268,11 +270,13 @@ public class AliyunOSSFileSystemStore {
*/
   public ObjectMetadata getObjectMetadata(String key) {
 try {
-  return ossClient

hadoop git commit: HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. Contributed by Jinhu Wu.

2018-11-13 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 a86b66534 -> 3aac324a0


HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. 
Contributed by Jinhu Wu.

(cherry picked from commit 3fade865ce84dcf68bcd7de5a5ed1c7d904796e9)
(cherry picked from commit 64cb97fb4467513f73fde18f96f391ad34e3bb0a)
(cherry picked from commit 5d532cfc6f23f942ed10edab55ed251eb99a0664)
(cherry picked from commit 37082a664aaf99bc40522a8dfa231d71792dd976)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3aac324a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3aac324a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3aac324a

Branch: refs/heads/branch-2
Commit: 3aac324a0760b097f7d91139a2352b13236461f7
Parents: a86b665
Author: Sammi Chen 
Authored: Wed Nov 14 12:58:57 2018 +0800
Committer: Sammi Chen 
Committed: Wed Nov 14 13:53:53 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  4 --
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 22 --
 .../site/markdown/tools/hadoop-aliyun/index.md  |  5 ++
 .../oss/TestAliyunOSSBlockOutputStream.java | 70 +---
 4 files changed, 83 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3aac324a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 7356818..809c8c8 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -457,7 +457,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 
   ObjectListing objects = store.listObjects(key, maxKeys, null, false);
   while (true) {
-statistics.incrementReadOps(1);
 for (OSSObjectSummary objectSummary : objects.getObjectSummaries()) {
   String objKey = objectSummary.getKey();
   if (objKey.equals(key + "/")) {
@@ -498,7 +497,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
   String nextMarker = objects.getNextMarker();
   objects = store.listObjects(key, maxKeys, nextMarker, false);
-  statistics.incrementReadOps(1);
 } else {
   break;
 }
@@ -694,7 +692,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 new SemaphoredDelegatingExecutor(boundedCopyThreadPool,
 maxConcurrentCopyTasksPerDir, true));
 ObjectListing objects = store.listObjects(srcKey, maxKeys, null, true);
-statistics.incrementReadOps(1);
 // Copy files from src folder to dst
 int copiesToFinish = 0;
 while (true) {
@@ -717,7 +714,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   if (objects.isTruncated()) {
 String nextMarker = objects.getNextMarker();
 objects = store.listObjects(srcKey, maxKeys, nextMarker, true);
-statistics.incrementReadOps(1);
   } else {
 break;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3aac324a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index f13ac32..f0413e3 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -175,6 +175,7 @@ public class AliyunOSSFileSystemStore {
   CannedAccessControlList cannedACL =
   CannedAccessControlList.valueOf(cannedACLName);
   ossClient.setBucketAcl(bucketName, cannedACL);
+  statistics.incrementWriteOps(1);
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
@@ -216,6 +217,7 @@ public class AliyunOSSFileSystemStore {
   // Here, we choose the simple mode to do batch delete.
   deleteRequest.setQuiet(true);
   DeleteObjectsResult result = ossClient.deleteObjects(deleteRequest);
+  statistics.incrementWriteOps(1);
   deleteFailed = result.getDeletedObjects();
   tries++;
   if (tries == retry) {
@@ -268,11 +270,13 @@ public class AliyunOSSFileSystemStore {
*/
   public ObjectMetadata g

hadoop git commit: HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. Contributed by Jinhu Wu.

2018-11-13 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 5884b1c28 -> ef085e088


HADOOP-15917. AliyunOSS: fix incorrect ReadOps and WriteOps in statistics. 
Contributed by Jinhu Wu.

(cherry picked from commit 3fade865ce84dcf68bcd7de5a5ed1c7d904796e9)
(cherry picked from commit 64cb97fb4467513f73fde18f96f391ad34e3bb0a)
(cherry picked from commit 5d532cfc6f23f942ed10edab55ed251eb99a0664)
(cherry picked from commit 37082a664aaf99bc40522a8dfa231d71792dd976)
(cherry picked from commit 3aac324a0760b097f7d91139a2352b13236461f7)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ef085e08
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ef085e08
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ef085e08

Branch: refs/heads/branch-2.9
Commit: ef085e0880a3309668603e4acb48b7f9dbe9e6ce
Parents: 5884b1c
Author: Sammi Chen 
Authored: Wed Nov 14 12:58:57 2018 +0800
Committer: Sammi Chen 
Committed: Wed Nov 14 13:57:58 2018 +0800

--
 .../fs/aliyun/oss/AliyunOSSFileSystem.java  |  4 --
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java | 22 --
 .../site/markdown/tools/hadoop-aliyun/index.md  |  5 ++
 .../oss/TestAliyunOSSBlockOutputStream.java | 70 +---
 4 files changed, 83 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef085e08/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
index 7356818..809c8c8 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
@@ -457,7 +457,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 
   ObjectListing objects = store.listObjects(key, maxKeys, null, false);
   while (true) {
-statistics.incrementReadOps(1);
 for (OSSObjectSummary objectSummary : objects.getObjectSummaries()) {
   String objKey = objectSummary.getKey();
   if (objKey.equals(key + "/")) {
@@ -498,7 +497,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   }
   String nextMarker = objects.getNextMarker();
   objects = store.listObjects(key, maxKeys, nextMarker, false);
-  statistics.incrementReadOps(1);
 } else {
   break;
 }
@@ -694,7 +692,6 @@ public class AliyunOSSFileSystem extends FileSystem {
 new SemaphoredDelegatingExecutor(boundedCopyThreadPool,
 maxConcurrentCopyTasksPerDir, true));
 ObjectListing objects = store.listObjects(srcKey, maxKeys, null, true);
-statistics.incrementReadOps(1);
 // Copy files from src folder to dst
 int copiesToFinish = 0;
 while (true) {
@@ -717,7 +714,6 @@ public class AliyunOSSFileSystem extends FileSystem {
   if (objects.isTruncated()) {
 String nextMarker = objects.getNextMarker();
 objects = store.listObjects(srcKey, maxKeys, nextMarker, true);
-statistics.incrementReadOps(1);
   } else {
 break;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef085e08/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index f13ac32..f0413e3 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -175,6 +175,7 @@ public class AliyunOSSFileSystemStore {
   CannedAccessControlList cannedACL =
   CannedAccessControlList.valueOf(cannedACLName);
   ossClient.setBucketAcl(bucketName, cannedACL);
+  statistics.incrementWriteOps(1);
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
@@ -216,6 +217,7 @@ public class AliyunOSSFileSystemStore {
   // Here, we choose the simple mode to do batch delete.
   deleteRequest.setQuiet(true);
   DeleteObjectsResult result = ossClient.deleteObjects(deleteRequest);
+  statistics.incrementWriteOps(1);
   deleteFailed = result.getDeletedObjects();
   tries++;
   if (tries == retry) {
@@ -268,11 +270,13 @@ 
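[Editorial note] Each of these commits also adds roughly 70 lines to TestAliyunOSSBlockOutputStream, per the diffstat; the test body itself is not included in these truncated messages. The rough shape of such an assertion, using only public Hadoop statistics APIs and not the patch's actual test code, would be:

import static org.junit.Assert.assertTrue;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem;

// Illustrative only: read the per-scheme statistics before and after an
// operation and check that the read-op counter moved.
public class ReadOpsDeltaSketch {
  static void assertGetFileStatusIsCounted(FileSystem fs, Path path) throws Exception {
    FileSystem.Statistics stats =
        FileSystem.getStatistics("oss", AliyunOSSFileSystem.class);
    int before = stats.getReadOps();
    fs.getFileStatus(path);   // at least one metadata round trip against OSS
    int after = stats.getReadOps();
    assertTrue("getFileStatus should increment ReadOps", after > before);
  }
}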

hadoop git commit: HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu Wu.

2018-09-18 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk f796cfde7 -> e4fca6aae


HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu 
Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e4fca6aa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e4fca6aa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e4fca6aa

Branch: refs/heads/trunk
Commit: e4fca6aae46a3c04fc56897986a4ab4e5aa98503
Parents: f796cfd
Author: Sammi Chen 
Authored: Tue Sep 18 18:37:49 2018 +0800
Committer: Sammi Chen 
Committed: Tue Sep 18 18:37:49 2018 +0800

--
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e4fca6aa/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 3669ffb..275ae6e 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1347,7 +1347,7 @@
   
 com.aliyun.oss
 aliyun-sdk-oss
-2.8.3
+3.0.0
 
   
 org.apache.httpcomponents





hadoop git commit: HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu Wu.

2018-09-18 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 46eeba623 -> 53528d5ec


HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu 
Wu.

(cherry picked from commit e4fca6aae46a3c04fc56897986a4ab4e5aa98503)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/53528d5e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/53528d5e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/53528d5e

Branch: refs/heads/branch-3.1
Commit: 53528d5ec5ab812484e2d5a6622d58ba8c86a5b0
Parents: 46eeba6
Author: Sammi Chen 
Authored: Tue Sep 18 18:37:49 2018 +0800
Committer: Sammi Chen 
Committed: Tue Sep 18 18:44:01 2018 +0800

--
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/53528d5e/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 2017503..771ede9 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1188,7 +1188,7 @@
   
 com.aliyun.oss
 aliyun-sdk-oss
-2.8.3
+3.0.0
 
   
 org.apache.httpcomponents





hadoop git commit: HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu Wu.

2018-09-18 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 f0537b516 -> 61847f4ea


HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu 
Wu.

(cherry picked from commit e4fca6aae46a3c04fc56897986a4ab4e5aa98503)
(cherry picked from commit 53528d5ec5ab812484e2d5a6622d58ba8c86a5b0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/61847f4e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/61847f4e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/61847f4e

Branch: refs/heads/branch-3.0
Commit: 61847f4ea07d7c7698b0658e097a9937db700b56
Parents: f0537b5
Author: Sammi Chen 
Authored: Tue Sep 18 18:37:49 2018 +0800
Committer: Sammi Chen 
Committed: Tue Sep 18 18:45:49 2018 +0800

--
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/61847f4e/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 5c2edf9..b4b2bac 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1167,7 +1167,7 @@
   
 com.aliyun.oss
 aliyun-sdk-oss
-2.8.3
+3.0.0
 
   
 org.apache.httpcomponents





hadoop git commit: HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu Wu.

2018-09-18 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 9bd261613 -> f6ab07e28


HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu 
Wu.

(cherry picked from commit e4fca6aae46a3c04fc56897986a4ab4e5aa98503)
(cherry picked from commit 53528d5ec5ab812484e2d5a6622d58ba8c86a5b0)
(cherry picked from commit 61847f4ea07d7c7698b0658e097a9937db700b56)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f6ab07e2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f6ab07e2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f6ab07e2

Branch: refs/heads/branch-2
Commit: f6ab07e28c346e7e59b56f7041d8eac028ba5fcc
Parents: 9bd2616
Author: Sammi Chen 
Authored: Tue Sep 18 18:37:49 2018 +0800
Committer: Sammi Chen 
Committed: Tue Sep 18 20:18:15 2018 +0800

--
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f6ab07e2/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index f1ab70d..288891c 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1109,7 +1109,7 @@
   
 com.aliyun.oss
 aliyun-sdk-oss
-2.8.3
+3.0.0
 
   
 org.apache.httpcomponents





hadoop git commit: HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu Wu.

2018-09-18 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 19edb89f9 -> 9208602a6


HADOOP-15759. AliyunOSS: Update oss-sdk version to 3.0.0. Contributed by Jinhu 
Wu.

(cherry picked from commit e4fca6aae46a3c04fc56897986a4ab4e5aa98503)
(cherry picked from commit 53528d5ec5ab812484e2d5a6622d58ba8c86a5b0)
(cherry picked from commit 61847f4ea07d7c7698b0658e097a9937db700b56)
(cherry picked from commit f6ab07e28c346e7e59b56f7041d8eac028ba5fcc)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9208602a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9208602a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9208602a

Branch: refs/heads/branch-2.9
Commit: 9208602a686593c9c6267f1788e2a305b7a87fef
Parents: 19edb89
Author: Sammi Chen 
Authored: Tue Sep 18 18:37:49 2018 +0800
Committer: Sammi Chen 
Committed: Tue Sep 18 20:20:15 2018 +0800

--
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9208602a/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 06cfb32..9e285e4 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1091,7 +1091,7 @@
   
 com.aliyun.oss
 aliyun-sdk-oss
-2.8.3
+3.0.0
 
   
 org.apache.httpcomponents





hadoop git commit: HADOOP-15671. AliyunOSS: Support Assume Roles in AliyunOSS. Contributed by Jinhu Wu.

2018-09-25 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/trunk 93b0f540e -> 2b635125f


HADOOP-15671. AliyunOSS: Support Assume Roles in AliyunOSS. Contributed by 
Jinhu Wu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2b635125
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2b635125
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2b635125

Branch: refs/heads/trunk
Commit: 2b635125fb059fc204ed35bc0e264c42dd3a9fe9
Parents: 93b0f54
Author: Sammi Chen 
Authored: Tue Sep 25 19:48:30 2018 +0800
Committer: Sammi Chen 
Committed: Tue Sep 25 19:48:30 2018 +0800

--
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  |   5 +-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java |   5 +-
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|   8 +-
 .../oss/AssumedRoleCredentialProvider.java  | 115 +++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  22 
 .../site/markdown/tools/hadoop-aliyun/index.md  |  50 
 .../fs/aliyun/oss/TestAliyunCredentials.java|  55 -
 7 files changed, 248 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b635125/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
index 0a833b2..17f21cb 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -120,7 +120,8 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 if (null == partETags) {
   throw new IOException("Failed to multipart upload to oss, abort 
it.");
 }
-store.completeMultipartUpload(key, uploadId, partETags);
+store.completeMultipartUpload(key, uploadId,
+new ArrayList<>(partETags));
   }
 } finally {
   removePartFiles();
@@ -129,7 +130,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
   }
 
   @Override
-  public void write(int b) throws IOException {
+  public synchronized void write(int b) throws IOException {
 singleByte[0] = (byte)b;
 write(singleByte, 0, 1);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b635125/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index dc5f99ee..7639eb3 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -149,7 +149,7 @@ public class AliyunOSSFileSystemStore {
   "null or empty. Please set proper endpoint with 'fs.oss.endpoint'.");
 }
 CredentialsProvider provider =
-AliyunOSSUtils.getCredentialsProvider(conf);
+AliyunOSSUtils.getCredentialsProvider(uri, conf);
 ossClient = new OSSClient(endPoint, provider, clientConf);
 uploadPartSize = AliyunOSSUtils.getMultipartSizeProperty(conf,
 MULTIPART_UPLOAD_PART_SIZE_KEY, MULTIPART_UPLOAD_PART_SIZE_DEFAULT);
@@ -168,6 +168,8 @@ public class AliyunOSSFileSystemStore {
   multipartThreshold = 1024 * 1024 * 1024;
 }
 
+bucketName = uri.getHost();
+
 String cannedACLName = conf.get(CANNED_ACL_KEY, CANNED_ACL_DEFAULT);
 if (StringUtils.isNotEmpty(cannedACLName)) {
   CannedAccessControlList cannedACL =
@@ -176,7 +178,6 @@ public class AliyunOSSFileSystemStore {
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
-bucketName = uri.getHost();
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b635125/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
index a7536d6..3e02d7f 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
+++ 
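[Editorial note] The AliyunOSSUtils hunk is cut off by the archive here, but the store-side hunk above already shows the key change in shape: getCredentialsProvider now takes the filesystem URI as well as the Configuration, so a provider such as the new AssumedRoleCredentialProvider can be resolved per bucket before the OSSClient is built. A hedged sketch of that wiring follows; the "fs.oss.credentials.provider" key name and the utility method's visibility are assumptions, everything else sticks to calls visible in the diff.

import java.io.IOException;
import java.net.URI;

import com.aliyun.oss.ClientConfiguration;
import com.aliyun.oss.OSSClient;
import com.aliyun.oss.common.auth.CredentialsProvider;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.aliyun.oss.AliyunOSSUtils;

// Sketch, not the patch: select the assumed-role provider via configuration and
// build the client the same way AliyunOSSFileSystemStore does in the hunk above.
public class AssumedRoleWiringSketch {
  static OSSClient newClient(URI uri, Configuration conf, String endPoint,
      ClientConfiguration clientConf) throws IOException {
    // Assumed key name; the patch's Constants.java additions define the real ones.
    conf.set("fs.oss.credentials.provider",
        "org.apache.hadoop.fs.aliyun.oss.AssumedRoleCredentialProvider");
    // New in this change: the URI is passed so credentials can be per-bucket.
    CredentialsProvider provider = AliyunOSSUtils.getCredentialsProvider(uri, conf);
    return new OSSClient(endPoint, provider, clientConf);
  }
}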

hadoop git commit: HADOOP-15671. AliyunOSS: Support Assume Roles in AliyunOSS. Contributed by Jinhu Wu.

2018-09-25 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 47306cc2d -> 5da3e8359


HADOOP-15671. AliyunOSS: Support Assume Roles in AliyunOSS. Contributed by 
Jinhu Wu.

(cherry picked from commit 2b635125fb059fc204ed35bc0e264c42dd3a9fe9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5da3e835
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5da3e835
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5da3e835

Branch: refs/heads/branch-3.1
Commit: 5da3e8359757c0c1afaccc1d3a0f2bdc453e0311
Parents: 47306cc
Author: Sammi Chen 
Authored: Tue Sep 25 19:48:30 2018 +0800
Committer: Sammi Chen 
Committed: Tue Sep 25 19:50:39 2018 +0800

--
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  |   5 +-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java |   5 +-
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|   8 +-
 .../oss/AssumedRoleCredentialProvider.java  | 115 +++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  22 
 .../site/markdown/tools/hadoop-aliyun/index.md  |  50 
 .../fs/aliyun/oss/TestAliyunCredentials.java|  55 -
 7 files changed, 248 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5da3e835/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
index 0a833b2..17f21cb 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -120,7 +120,8 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 if (null == partETags) {
   throw new IOException("Failed to multipart upload to oss, abort 
it.");
 }
-store.completeMultipartUpload(key, uploadId, partETags);
+store.completeMultipartUpload(key, uploadId,
+new ArrayList<>(partETags));
   }
 } finally {
   removePartFiles();
@@ -129,7 +130,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
   }
 
   @Override
-  public void write(int b) throws IOException {
+  public synchronized void write(int b) throws IOException {
 singleByte[0] = (byte)b;
 write(singleByte, 0, 1);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5da3e835/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index c63a05b..0f418d7 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -149,7 +149,7 @@ public class AliyunOSSFileSystemStore {
   "null or empty. Please set proper endpoint with 'fs.oss.endpoint'.");
 }
 CredentialsProvider provider =
-AliyunOSSUtils.getCredentialsProvider(conf);
+AliyunOSSUtils.getCredentialsProvider(uri, conf);
 ossClient = new OSSClient(endPoint, provider, clientConf);
 uploadPartSize = AliyunOSSUtils.getMultipartSizeProperty(conf,
 MULTIPART_UPLOAD_PART_SIZE_KEY, MULTIPART_UPLOAD_PART_SIZE_DEFAULT);
@@ -168,6 +168,8 @@ public class AliyunOSSFileSystemStore {
   multipartThreshold = 1024 * 1024 * 1024;
 }
 
+bucketName = uri.getHost();
+
 String cannedACLName = conf.get(CANNED_ACL_KEY, CANNED_ACL_DEFAULT);
 if (StringUtils.isNotEmpty(cannedACLName)) {
   CannedAccessControlList cannedACL =
@@ -176,7 +178,6 @@ public class AliyunOSSFileSystemStore {
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
-bucketName = uri.getHost();
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5da3e835/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
index 2fe06c1..1f95965 100644
--- 
a/hadoop-tools/hadoop

hadoop git commit: HADOOP-15671. AliyunOSS: Support Assume Roles in AliyunOSS. Contributed by Jinhu Wu.

2018-09-25 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 2449795b8 -> 85e00477b


HADOOP-15671. AliyunOSS: Support Assume Roles in AliyunOSS. Contributed by 
Jinhu Wu.

(cherry picked from commit 2b635125fb059fc204ed35bc0e264c42dd3a9fe9)
(cherry picked from commit 5da3e8359757c0c1afaccc1d3a0f2bdc453e0311)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/85e00477
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/85e00477
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/85e00477

Branch: refs/heads/branch-3.0
Commit: 85e00477b8b3ee9c007aa111429588b6616128e2
Parents: 2449795
Author: Sammi Chen 
Authored: Tue Sep 25 19:48:30 2018 +0800
Committer: Sammi Chen 
Committed: Tue Sep 25 19:53:03 2018 +0800

--
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  |   5 +-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java |   5 +-
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|   8 +-
 .../oss/AssumedRoleCredentialProvider.java  | 115 +++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  22 
 .../site/markdown/tools/hadoop-aliyun/index.md  |  50 
 .../fs/aliyun/oss/TestAliyunCredentials.java|  55 -
 7 files changed, 248 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/85e00477/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
index 0a833b2..17f21cb 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -120,7 +120,8 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 if (null == partETags) {
   throw new IOException("Failed to multipart upload to oss, abort 
it.");
 }
-store.completeMultipartUpload(key, uploadId, partETags);
+store.completeMultipartUpload(key, uploadId,
+new ArrayList<>(partETags));
   }
 } finally {
   removePartFiles();
@@ -129,7 +130,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
   }
 
   @Override
-  public void write(int b) throws IOException {
+  public synchronized void write(int b) throws IOException {
 singleByte[0] = (byte)b;
 write(singleByte, 0, 1);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/85e00477/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index c63a05b..0f418d7 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -149,7 +149,7 @@ public class AliyunOSSFileSystemStore {
   "null or empty. Please set proper endpoint with 'fs.oss.endpoint'.");
 }
 CredentialsProvider provider =
-AliyunOSSUtils.getCredentialsProvider(conf);
+AliyunOSSUtils.getCredentialsProvider(uri, conf);
 ossClient = new OSSClient(endPoint, provider, clientConf);
 uploadPartSize = AliyunOSSUtils.getMultipartSizeProperty(conf,
 MULTIPART_UPLOAD_PART_SIZE_KEY, MULTIPART_UPLOAD_PART_SIZE_DEFAULT);
@@ -168,6 +168,8 @@ public class AliyunOSSFileSystemStore {
   multipartThreshold = 1024 * 1024 * 1024;
 }
 
+bucketName = uri.getHost();
+
 String cannedACLName = conf.get(CANNED_ACL_KEY, CANNED_ACL_DEFAULT);
 if (StringUtils.isNotEmpty(cannedACLName)) {
   CannedAccessControlList cannedACL =
@@ -176,7 +178,6 @@ public class AliyunOSSFileSystemStore {
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
-bucketName = uri.getHost();
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/85e00477/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunO

hadoop git commit: HADOOP-15671. AliyunOSS: Support Assume Roles in AliyunOSS. Contributed by Jinhu Wu.

2018-09-25 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 693792583 -> c617dba49


HADOOP-15671. AliyunOSS: Support Assume Roles in AliyunOSS. Contributed by 
Jinhu Wu.

(cherry picked from commit 2b635125fb059fc204ed35bc0e264c42dd3a9fe9)
(cherry picked from commit 5da3e8359757c0c1afaccc1d3a0f2bdc453e0311)
(cherry picked from commit 85e00477b8b3ee9c007aa111429588b6616128e2)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c617dba4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c617dba4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c617dba4

Branch: refs/heads/branch-2
Commit: c617dba49770d03f9a9a519f6353b2a2afc3a930
Parents: 6937925
Author: Sammi Chen 
Authored: Tue Sep 25 19:48:30 2018 +0800
Committer: Sammi Chen 
Committed: Tue Sep 25 19:55:13 2018 +0800

--
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  |   5 +-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java |   5 +-
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|   8 +-
 .../oss/AssumedRoleCredentialProvider.java  | 115 +++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  22 
 .../site/markdown/tools/hadoop-aliyun/index.md  |  50 
 .../fs/aliyun/oss/TestAliyunCredentials.java|  55 -
 7 files changed, 248 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c617dba4/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
index 42cb0b1..353b2da 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -124,7 +124,8 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 if (null == partETags) {
   throw new IOException("Failed to multipart upload to oss, abort 
it.");
 }
-store.completeMultipartUpload(key, uploadId, partETags);
+store.completeMultipartUpload(key, uploadId,
+new ArrayList<>(partETags));
   }
 } finally {
   removePartFiles();
@@ -133,7 +134,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
   }
 
   @Override
-  public void write(int b) throws IOException {
+  public synchronized void write(int b) throws IOException {
 singleByte[0] = (byte)b;
 write(singleByte, 0, 1);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c617dba4/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index 0f99cd6..f13ac32 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -149,7 +149,7 @@ public class AliyunOSSFileSystemStore {
   "null or empty. Please set proper endpoint with 'fs.oss.endpoint'.");
 }
 CredentialsProvider provider =
-AliyunOSSUtils.getCredentialsProvider(conf);
+AliyunOSSUtils.getCredentialsProvider(uri, conf);
 ossClient = new OSSClient(endPoint, provider, clientConf);
 uploadPartSize = AliyunOSSUtils.getMultipartSizeProperty(conf,
 MULTIPART_UPLOAD_PART_SIZE_KEY, MULTIPART_UPLOAD_PART_SIZE_DEFAULT);
@@ -168,6 +168,8 @@ public class AliyunOSSFileSystemStore {
   multipartThreshold = 1024 * 1024 * 1024;
 }
 
+bucketName = uri.getHost();
+
 String cannedACLName = conf.get(CANNED_ACL_KEY, CANNED_ACL_DEFAULT);
 if (StringUtils.isNotEmpty(cannedACLName)) {
   CannedAccessControlList cannedACL =
@@ -176,7 +178,6 @@ public class AliyunOSSFileSystemStore {
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
-bucketName = uri.getHost();
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c617dba4/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
 
b/hadoop-tools/ha

hadoop git commit: HADOOP-15671. AliyunOSS: Support Assume Roles in AliyunOSS. Contributed by Jinhu Wu.

2018-09-25 Thread sammichen
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 cebfcd952 -> 76f183497


HADOOP-15671. AliyunOSS: Support Assume Roles in AliyunOSS. Contributed by 
Jinhu Wu.

(cherry picked from commit 2b635125fb059fc204ed35bc0e264c42dd3a9fe9)
(cherry picked from commit 5da3e8359757c0c1afaccc1d3a0f2bdc453e0311)
(cherry picked from commit 85e00477b8b3ee9c007aa111429588b6616128e2)
(cherry picked from commit c617dba49770d03f9a9a519f6353b2a2afc3a930)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/76f18349
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/76f18349
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/76f18349

Branch: refs/heads/branch-2.9
Commit: 76f183497660f36bf2ff00f29c187bed01ecaa64
Parents: cebfcd9
Author: Sammi Chen 
Authored: Tue Sep 25 19:48:30 2018 +0800
Committer: Sammi Chen 
Committed: Tue Sep 25 19:56:30 2018 +0800

--
 .../aliyun/oss/AliyunOSSBlockOutputStream.java  |   5 +-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java |   5 +-
 .../hadoop/fs/aliyun/oss/AliyunOSSUtils.java|   8 +-
 .../oss/AssumedRoleCredentialProvider.java  | 115 +++
 .../apache/hadoop/fs/aliyun/oss/Constants.java  |  22 
 .../site/markdown/tools/hadoop-aliyun/index.md  |  50 
 .../fs/aliyun/oss/TestAliyunCredentials.java|  55 -
 7 files changed, 248 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/76f18349/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
index 42cb0b1..353b2da 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSBlockOutputStream.java
@@ -124,7 +124,8 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
 if (null == partETags) {
   throw new IOException("Failed to multipart upload to oss, abort 
it.");
 }
-store.completeMultipartUpload(key, uploadId, partETags);
+store.completeMultipartUpload(key, uploadId,
+new ArrayList<>(partETags));
   }
 } finally {
   removePartFiles();
@@ -133,7 +134,7 @@ public class AliyunOSSBlockOutputStream extends 
OutputStream {
   }
 
   @Override
-  public void write(int b) throws IOException {
+  public synchronized void write(int b) throws IOException {
 singleByte[0] = (byte)b;
 write(singleByte, 0, 1);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/76f18349/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
index 0f99cd6..f13ac32 100644
--- 
a/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
@@ -149,7 +149,7 @@ public class AliyunOSSFileSystemStore {
   "null or empty. Please set proper endpoint with 'fs.oss.endpoint'.");
 }
 CredentialsProvider provider =
-AliyunOSSUtils.getCredentialsProvider(conf);
+AliyunOSSUtils.getCredentialsProvider(uri, conf);
 ossClient = new OSSClient(endPoint, provider, clientConf);
 uploadPartSize = AliyunOSSUtils.getMultipartSizeProperty(conf,
 MULTIPART_UPLOAD_PART_SIZE_KEY, MULTIPART_UPLOAD_PART_SIZE_DEFAULT);
@@ -168,6 +168,8 @@ public class AliyunOSSFileSystemStore {
   multipartThreshold = 1024 * 1024 * 1024;
 }
 
+bucketName = uri.getHost();
+
 String cannedACLName = conf.get(CANNED_ACL_KEY, CANNED_ACL_DEFAULT);
 if (StringUtils.isNotEmpty(cannedACLName)) {
   CannedAccessControlList cannedACL =
@@ -176,7 +178,6 @@ public class AliyunOSSFileSystemStore {
 }
 
 maxKeys = conf.getInt(MAX_PAGING_KEYS_KEY, MAX_PAGING_KEYS_DEFAULT);
-bucketName = uri.getHost();
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/76f18349/hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
--
diff --git 
a/hadoop-tools/hadoop-aliyun/src/main/jav
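[Editorial note] One small but easy-to-miss piece of this change, repeated in every branch above, is that write(int b) becomes synchronized: the single-byte path stages the byte in a buffer shared across calls (singleByte), so it has to hold the same lock as the bulk write path. The toy stream below illustrates the pattern; it is not the connector's class. The related new ArrayList<>(partETags) in the close path is the same defensive idea, handing the store a private copy of the part list so it cannot change during completeMultipartUpload.

import java.io.IOException;
import java.io.OutputStream;

// Illustration only: a reused one-byte staging buffer is safe only if the
// single-byte and bulk write paths synchronize on the same monitor.
class SingleByteStagingStream extends OutputStream {
  private final OutputStream out;
  private final byte[] singleByte = new byte[1];

  SingleByteStagingStream(OutputStream out) {
    this.out = out;
  }

  @Override
  public synchronized void write(int b) throws IOException {
    singleByte[0] = (byte) b;     // store into the shared buffer...
    out.write(singleByte, 0, 1);  // ...and hand it off under the same lock
  }

  @Override
  public synchronized void write(byte[] b, int off, int len) throws IOException {
    out.write(b, off, len);
  }
}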

[hadoop] branch trunk updated: HDFS-13369. Fix for FSCK Report broken with RequestHedgingProxyProvider (#4917)

2022-09-30 Thread sammichen
This is an automated email from the ASF dual-hosted git repository.

sammichen pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4891bf50491 HDFS-13369. Fix for FSCK Report broken with 
RequestHedgingProxyProvider (#4917)
4891bf50491 is described below

commit 4891bf50491373306b89cb5cc310b9d5ebf35156
Author: Navink 
AuthorDate: Fri Sep 30 20:58:12 2022 +0530

HDFS-13369. Fix for FSCK Report broken with RequestHedgingProxyProvider 
(#4917)

Contributed-by: navinko 
---
 .../main/java/org/apache/hadoop/ipc/Client.java|  16 +++
 .../namenode/ha/RequestHedgingProxyProvider.java   |  40 --
 .../ha/TestRequestHedgingProxyProvider.java|   4 +
 .../hdfs/server/namenode/TestAllowFormat.java  |   3 +-
 .../hadoop/hdfs/server/namenode/ha/HATestUtil.java | 134 ++---
 .../namenode/ha/TestDelegationTokensWithHA.java|   2 +-
 .../hadoop/hdfs/server/namenode/ha/TestHAFsck.java |  34 --
 7 files changed, 170 insertions(+), 63 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index 20fc9efe57e..f0d4f8921a3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -124,12 +124,28 @@ public class Client implements AutoCloseable {
 Preconditions.checkArgument(cid != RpcConstants.INVALID_CALL_ID);
 Preconditions.checkState(callId.get() == null);
 Preconditions.checkArgument(rc != RpcConstants.INVALID_RETRY_COUNT);
+setCallIdAndRetryCountUnprotected(cid, rc, externalHandler);
+  }
 
+  public static void setCallIdAndRetryCountUnprotected(Integer cid, int rc,
+  Object externalHandler) {
 callId.set(cid);
 retryCount.set(rc);
 EXTERNAL_CALL_HANDLER.set(externalHandler);
   }
 
+  public static int getCallId() {
+return callId.get() != null ? callId.get() : nextCallId();
+  }
+
+  public static int getRetryCount() {
+return retryCount.get() != null ? retryCount.get() : 0;
+  }
+
+  public static Object getExternalHandler() {
+return EXTERNAL_CALL_HANDLER.get();
+  }
+
   private final ConcurrentMap connections =
   new ConcurrentHashMap<>();
   private final Object putLock = new Object();
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
index 9011b25eda0..5e83fff6b78 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
@@ -18,7 +18,6 @@
 package org.apache.hadoop.hdfs.server.namenode.ha;
 
 import java.io.IOException;
-import java.lang.reflect.InvocationHandler;
 import java.lang.reflect.InvocationTargetException;
 import java.lang.reflect.Method;
 import java.lang.reflect.Proxy;
@@ -27,20 +26,24 @@ import java.util.HashMap;
 import java.util.Map;
 import java.util.concurrent.Callable;
 import java.util.concurrent.CompletionService;
+import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorCompletionService;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
-import java.util.concurrent.ExecutionException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.MultiException;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.ipc.Client.ConnectionId;
+import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.ipc.RpcInvocationHandler;
 import org.apache.hadoop.ipc.StandbyException;
 
-import org.apache.hadoop.io.retry.MultiException;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
 /**
  * A FailoverProxyProvider implementation that technically does not "failover"
  * per-se. It constructs a wrapper proxy that sends the request to ALL
@@ -55,7 +58,7 @@ public class RequestHedgingProxyProvider extends
   public static final Logger LOG =
   LoggerFactory.getLogger(RequestHedgingProxyProvider.class);
 
-  class RequestHedgingInvocationHandler implements InvocationHandler {
+  class RequestHedgingInvocationHandler implements RpcInvocationHandler {
 
 final Map<String, ProxyInfo<T>> targetProxies;
 // Proxy of the active nn
@@ -123,11 +126,18 @@ public class RequestHedgingProxyProvider extends
   }

[hadoop] branch branch-3.3.5 updated: HDFS-13369. Fix for FSCK Report broken with RequestHedgingProxyProvider (#4917)

2022-10-12 Thread sammichen
This is an automated email from the ASF dual-hosted git repository.

sammichen pushed a commit to branch branch-3.3.5
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3.5 by this push:
 new 802494e3a4f HDFS-13369. Fix for FSCK Report broken with 
RequestHedgingProxyProvider (#4917)
802494e3a4f is described below

commit 802494e3a4ff11ac3027296d90fcf7f9b3a56454
Author: Navink 
AuthorDate: Fri Sep 30 20:58:12 2022 +0530

HDFS-13369. Fix for FSCK Report broken with RequestHedgingProxyProvider 
(#4917)

Contributed-by: navinko 
(cherry picked from commit 4891bf50491373306b89cb5cc310b9d5ebf35156)
---
 .../main/java/org/apache/hadoop/ipc/Client.java|  16 +++
 .../namenode/ha/RequestHedgingProxyProvider.java   |  40 --
 .../ha/TestRequestHedgingProxyProvider.java|   4 +
 .../hdfs/server/namenode/TestAllowFormat.java  |   3 +-
 .../hadoop/hdfs/server/namenode/ha/HATestUtil.java | 134 ++---
 .../namenode/ha/TestDelegationTokensWithHA.java|   2 +-
 .../hadoop/hdfs/server/namenode/ha/TestHAFsck.java |  34 --
 7 files changed, 170 insertions(+), 63 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index d21c8073cf1..2e51c63389b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -123,12 +123,28 @@ public class Client implements AutoCloseable {
 Preconditions.checkArgument(cid != RpcConstants.INVALID_CALL_ID);
 Preconditions.checkState(callId.get() == null);
 Preconditions.checkArgument(rc != RpcConstants.INVALID_RETRY_COUNT);
+setCallIdAndRetryCountUnprotected(cid, rc, externalHandler);
+  }
 
+  public static void setCallIdAndRetryCountUnprotected(Integer cid, int rc,
+  Object externalHandler) {
 callId.set(cid);
 retryCount.set(rc);
 EXTERNAL_CALL_HANDLER.set(externalHandler);
   }
 
+  public static int getCallId() {
+return callId.get() != null ? callId.get() : nextCallId();
+  }
+
+  public static int getRetryCount() {
+return retryCount.get() != null ? retryCount.get() : 0;
+  }
+
+  public static Object getExternalHandler() {
+return EXTERNAL_CALL_HANDLER.get();
+  }
+
  private final ConcurrentMap<ConnectionId, Connection> connections =
  new ConcurrentHashMap<>();
   private final Object putLock = new Object();
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
index 9011b25eda0..5e83fff6b78 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java
@@ -18,7 +18,6 @@
 package org.apache.hadoop.hdfs.server.namenode.ha;
 
 import java.io.IOException;
-import java.lang.reflect.InvocationHandler;
 import java.lang.reflect.InvocationTargetException;
 import java.lang.reflect.Method;
 import java.lang.reflect.Proxy;
@@ -27,20 +26,24 @@ import java.util.HashMap;
 import java.util.Map;
 import java.util.concurrent.Callable;
 import java.util.concurrent.CompletionService;
+import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorCompletionService;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
-import java.util.concurrent.ExecutionException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.MultiException;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.ipc.Client.ConnectionId;
+import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.ipc.RpcInvocationHandler;
 import org.apache.hadoop.ipc.StandbyException;
 
-import org.apache.hadoop.io.retry.MultiException;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
 /**
  * A FailoverProxyProvider implementation that technically does not "failover"
  * per-se. It constructs a wrapper proxy that sends the request to ALL
@@ -55,7 +58,7 @@ public class RequestHedgingProxyProvider extends
   public static final Logger LOG =
   LoggerFactory.getLogger(RequestHedgingProxyProvider.class);
 
-  class RequestHedgingInvocationHandler implements InvocationHandler {
+  class RequestHedgingInvocationHandler implements RpcInvocationHandler {
 
 final Map<String, ProxyInfo<T>> targetProxies;
 // Proxy of the active nn
@@ -123,11 +126
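
The second part of the patch switches RequestHedgingInvocationHandler from the plain
java.lang.reflect.InvocationHandler to Hadoop's RpcInvocationHandler. As a rough sketch
of what that interface asks for, here is a minimal single-proxy handler; the class name
is hypothetical, and it assumes the wrapped proxy was created through Hadoop RPC so
that RPC.getConnectionIdForProxy() can resolve its connection.

// Illustrative only: a single-proxy RpcInvocationHandler. The hedging handler
// in this patch does the same three things for its active proxy: delegate
// invoke(), expose a ConnectionId, and close the wrapped proxy.
import java.io.IOException;
import java.lang.reflect.Method;
import org.apache.hadoop.ipc.Client.ConnectionId;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.RpcInvocationHandler;

public class DelegatingRpcInvocationHandler implements RpcInvocationHandler {
  private final Object proxy;

  public DelegatingRpcInvocationHandler(Object proxy) {
    this.proxy = proxy;
  }

  @Override
  public Object invoke(Object unused, Method method, Object[] args)
      throws Throwable {
    return method.invoke(proxy, args);
  }

  @Override
  public ConnectionId getConnectionId() {
    // Works only for proxies created by Hadoop RPC (assumption).
    return RPC.getConnectionIdForProxy(proxy);
  }

  @Override
  public void close() throws IOException {
    RPC.stopProxy(proxy);
  }
}
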

[hadoop] branch trunk updated (926993cb73f -> b5e8269d9b4)

2023-03-27 Thread sammichen
This is an automated email from the ASF dual-hosted git repository.

sammichen pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 926993cb73f YARN-11376. [Federation] Support 
updateNodeResource、refreshNodesResources API's for Federation. (#5496)
 add b5e8269d9b4 HADOOP-18458: AliyunOSSBlockOutputStream to support 
heap/off-heap buffer before uploading data to OSS (#4912)

No new revisions were added by this update.

Summary of changes:
 .../fs/aliyun/oss/AliyunOSSBlockOutputStream.java  | 213 +++
 .../hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java  |  33 +-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java|  63 +++-
 .../org/apache/hadoop/fs/aliyun/oss/Constants.java |  53 +++
 .../hadoop/fs/aliyun/oss/OSSDataBlocks.java}   | 417 ++---
 .../statistics/BlockOutputStreamStatistics.java|  50 ++-
 .../statistics/impl/OutputStreamStatistics.java|  98 +
 .../aliyun/oss/statistics/impl}/package-info.java  |   9 +-
 .../fs/aliyun/oss/statistics}/package-info.java|   4 +-
 .../src/site/markdown/tools/hadoop-aliyun/index.md |  50 ++-
 .../aliyun/oss/TestAliyunOSSBlockOutputStream.java | 193 +-
 11 files changed, 917 insertions(+), 266 deletions(-)
 copy 
hadoop-tools/{hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ADataBlocks.java
 => 
hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/OSSDataBlocks.java} 
(72%)
 copy 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerUpdateType.java
 => 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/statistics/BlockOutputStreamStatistics.java
 (54%)
 create mode 100644 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/statistics/impl/OutputStreamStatistics.java
 copy 
{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec
 => 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/statistics/impl}/package-info.java
 (79%)
 copy 
{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util
 => 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/statistics}/package-info.java
 (84%)
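
The HADOOP-18458 change summarized above lets AliyunOSSBlockOutputStream stage data in
heap or off-heap (direct) buffers before uploading a block to OSS. The snippet below
only illustrates that general buffering idea; the class and method names are invented
for the example and do not reflect the actual OSSDataBlocks API.

// Hypothetical sketch of heap vs. off-heap staging before an upload.
// Not the hadoop-aliyun implementation; names are invented for illustration.
import java.nio.ByteBuffer;

final class UploadBuffer {
  private final ByteBuffer buf;

  UploadBuffer(int capacity, boolean offHeap) {
    // Off-heap (direct) buffers avoid an extra copy into the Java heap and
    // reduce GC pressure for large blocks; heap buffers are simpler to debug.
    this.buf = offHeap
        ? ByteBuffer.allocateDirect(capacity)
        : ByteBuffer.allocate(capacity);
  }

  boolean write(byte[] data, int off, int len) {
    if (buf.remaining() < len) {
      return false;   // caller should upload this block and start a new one
    }
    buf.put(data, off, len);
    return true;
  }

  ByteBuffer readyForUpload() {
    buf.flip();       // switch from writing to reading for the upload
    return buf.asReadOnlyBuffer();
  }
}
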


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: HDFS-14356. Implement HDFS cache on SCM with native PMDK libs. Contributed by Feilong He.

2019-06-05 Thread sammichen
This is an automated email from the ASF dual-hosted git repository.

sammichen pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d1aad44  HDFS-14356. Implement HDFS cache on SCM with native PMDK 
libs. Contributed by Feilong He.
d1aad44 is described below

commit d1aad444907e1fc5314e8e64529e57c51ed7561c
Author: Sammi Chen 
AuthorDate: Wed Jun 5 21:33:00 2019 +0800

HDFS-14356. Implement HDFS cache on SCM with native PMDK libs. Contributed 
by Feilong He.
---
 BUILDING.txt   |  28 +++
 dev-support/bin/dist-copynativelibs|   8 +
 hadoop-common-project/hadoop-common/pom.xml|   2 +
 .../hadoop-common/src/CMakeLists.txt   |  21 ++
 .../hadoop-common/src/config.h.cmake   |   1 +
 .../org/apache/hadoop/io/nativeio/NativeIO.java| 135 ++-
 .../src/org/apache/hadoop/io/nativeio/NativeIO.c   | 252 +
 .../src/org/apache/hadoop/io/nativeio/pmdk_load.c  | 106 +
 .../src/org/apache/hadoop/io/nativeio/pmdk_load.h  |  95 
 .../apache/hadoop/io/nativeio/TestNativeIO.java| 153 +
 .../datanode/fsdataset/impl/FsDatasetCache.java|  22 ++
 .../datanode/fsdataset/impl/FsDatasetImpl.java |   8 +
 .../datanode/fsdataset/impl/FsDatasetUtil.java |  22 ++
 .../datanode/fsdataset/impl/MappableBlock.java |   6 +
 .../fsdataset/impl/MappableBlockLoader.java|  11 +-
 .../fsdataset/impl/MappableBlockLoaderFactory.java |   4 +
 .../fsdataset/impl/MemoryMappableBlockLoader.java  |   8 +-
 .../datanode/fsdataset/impl/MemoryMappedBlock.java |   5 +
 ...der.java => NativePmemMappableBlockLoader.java} | 166 +++---
 ...MappedBlock.java => NativePmemMappedBlock.java} |  49 ++--
 .../fsdataset/impl/PmemMappableBlockLoader.java|  10 +-
 .../datanode/fsdataset/impl/PmemMappedBlock.java   |   5 +
 22 files changed, 1009 insertions(+), 108 deletions(-)

diff --git a/BUILDING.txt b/BUILDING.txt
index cc9ac17..8c57a1d 100644
--- a/BUILDING.txt
+++ b/BUILDING.txt
@@ -78,6 +78,8 @@ Optional packages:
   $ sudo apt-get install fuse libfuse-dev
 * ZStandard compression
 $ sudo apt-get install zstd
+* PMDK library for storage class memory(SCM) as HDFS cache backend
+  Please refer to http://pmem.io/ and https://github.com/pmem/pmdk
 
 
--
 Maven main modules:
@@ -262,6 +264,32 @@ Maven build goals:
invoke, run 'mvn dependency-check:aggregate'. Note that this plugin
requires maven 3.1.1 or greater.
 
+ PMDK library build options:
+
+   The Persistent Memory Development Kit (PMDK), formerly known as NVML, is a 
growing
+   collection of libraries which have been developed for various use cases, 
tuned,
+   validated to production quality, and thoroughly documented. These libraries 
are built
+   on the Direct Access (DAX) feature available in both Linux and Windows, 
which allows
+   applications directly load/store access to persistent memory by 
memory-mapping files
+   on a persistent memory aware file system.
+
+   It is currently an optional component, meaning that Hadoop can be built 
without
+   this dependency. Please Note the library is used via dynamic module. For 
getting
+   more details please refer to the official sites:
+   http://pmem.io/ and https://github.com/pmem/pmdk.
+
+  * -Drequire.pmdk is used to build the project with PMDK libraries forcibly. 
With this
+option provided, the build will fail if libpmem library is not found. If 
this option
+is not given, the build will generate a version of Hadoop with 
libhadoop.so.
+And storage class memory(SCM) backed HDFS cache is still supported without 
PMDK involved.
+Because PMDK can bring better caching write/read performance, it is 
recommended to build
+the project with this option if user plans to use SCM backed HDFS cache.
+  * -Dpmdk.lib is used to specify a nonstandard location for PMDK libraries if 
they are not
+under /usr/lib or /usr/lib64.
+  * -Dbundle.pmdk is used to copy the specified libpmem libraries into the 
distribution tar
+package. This option requires that -Dpmdk.lib is specified. With 
-Dbundle.pmdk provided,
+the build will fail if -Dpmdk.lib is not specified.
+
 
--
 Building components separately
 
diff --git a/dev-support/bin/dist-copynativelibs 
b/dev-support/bin/dist-copynativelibs
index 67d2edf..4a783f0 100755
--- a/dev-support/bin/dist-copynativelibs
+++ b/dev-support/bin/dist-copynativelibs
@@ -96,6 +96,12 @@ for i in "$@"; do
 --isalbundle=*)
   ISALBUNDLE=${i#*=}
 ;;
+--pmdklib=*)
+  PMDKLIB=${i#*=}
+;;
+--pmdkbundle=*)
+  PMDKBUNDLE=${i#*=}
+;;
 --opensslbinbundle=*)
   OPENSSLBINBUNDLE
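
The BUILDING.txt text added above describes PMDK's model: a DAX-aware file system lets
an application memory-map a file and reach persistent memory with plain load/store
instructions. As a rough illustration of that model using only standard Java NIO (the
cache path in this patch goes through NativeIO and libpmem instead, and the mount point
below is a made-up assumption):

// Illustration only: maps a file into the address space so reads and writes go
// through load/store instructions. On a DAX-mounted pmem file system the mapping
// is backed directly by persistent memory. The path is hypothetical.
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PmemMapExample {
  public static void main(String[] args) throws IOException {
    try (FileChannel ch = FileChannel.open(
        Paths.get("/mnt/pmem0/hdfs-cache/block_1"),   // hypothetical DAX mount
        StandardOpenOption.CREATE, StandardOpenOption.READ,
        StandardOpenOption.WRITE)) {
      MappedByteBuffer region = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
      region.put(0, (byte) 42);   // store directly into the mapped region
      region.force();             // flush the mapping to the backing store
    }
  }
}
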

[hadoop] branch trunk updated: HDDS-1653. Add option to "ozone scmcli printTopology" to order the output according to topology layer. Contributed by Xiaoyu Yao. (#1067)

2019-07-18 Thread sammichen
This is an automated email from the ASF dual-hosted git repository.

sammichen pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4e66cb9  HDDS-1653. Add option to "ozone scmcli printTopology" to 
order the output according to topology layer. Contributed by Xiaoyu Yao.  
(#1067)
4e66cb9 is described below

commit 4e66cb9333bc7af9895e27b1a9c8a71577c82c73
Author: Xiaoyu Yao 
AuthorDate: Thu Jul 18 20:00:49 2019 -0700

HDDS-1653. Add option to "ozone scmcli printTopology" to order the output 
according to topology layer. Contributed by Xiaoyu Yao.  (#1067)

* HDDS-1653. Add option to "ozone scmcli printTopology" to order the output 
according to topology layer. Contributed by Xiaoyu Yao.

* use ip/hostname instead of network name for -o output and add smoke test
---
 .../hadoop/hdds/scm/cli/TopologySubcommand.java| 60 +++---
 .../dist/src/main/compose/ozone-topology/test.sh   |  2 +
 .../dist/src/main/smoketest/topology/scmcli.robot  | 32 
 3 files changed, 86 insertions(+), 8 deletions(-)

diff --git 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
index 6deccd1..7de2e4b 100644
--- 
a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
+++ 
b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
@@ -19,9 +19,11 @@
 package org.apache.hadoop.hdds.scm.cli;
 
 import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.client.ScmClient;
 import picocli.CommandLine;
+
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
 import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONED;
 import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONING;
@@ -29,7 +31,11 @@ import static 
org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
 
 import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
 import java.util.List;
+import java.util.TreeSet;
 import java.util.concurrent.Callable;
 
 /**
@@ -55,6 +61,10 @@ public class TopologySubcommand implements Callable<Void> {
 stateArray.add(DECOMMISSIONED);
   }
 
+  @CommandLine.Option(names = {"-o", "--order"},
+  description = "Print Topology ordered by network location")
+  private boolean order;
+
   @Override
   public Void call() throws Exception {
 try (ScmClient scmClient = parent.createScmClient()) {
@@ -64,17 +74,51 @@ public class TopologySubcommand implements Callable<Void> {
 if (nodes != null && nodes.size() > 0) {
   // show node state
   System.out.println("State = " + state.toString());
-  // format "hostname/ipAddressnetworkLocation"
-  nodes.forEach(node -> {
-System.out.print(node.getNodeID().getHostName() + "/" +
-node.getNodeID().getIpAddress());
-System.out.println("" +
-(node.getNodeID().getNetworkLocation() != null ?
-node.getNodeID().getNetworkLocation() : "NA"));
-  });
+  if (order) {
+printOrderedByLocation(nodes);
+  } else {
+printNodesWithLocation(nodes);
+  }
 }
   }
   return null;
 }
   }
+
+  // Format
+  // Location: rack1
+  //  ipAddress(hostName)
+  private void printOrderedByLocation(List<HddsProtos.Node> nodes) {
+HashMap<String, TreeSet<DatanodeDetails>> tree =
+new HashMap<>();
+for (HddsProtos.Node node : nodes) {
+  String location = node.getNodeID().getNetworkLocation();
+  if (location != null && !tree.containsKey(location)) {
+tree.put(location, new TreeSet<>());
+  }
+  
tree.get(location).add(DatanodeDetails.getFromProtoBuf(node.getNodeID()));
+}
+ArrayList<String> locations = new ArrayList<>(tree.keySet());
+Collections.sort(locations);
+
+locations.forEach(location -> {
+  System.out.println("Location: " + location);
+  tree.get(location).forEach(node -> {
+System.out.println(" " + node.getIpAddress() + "(" + node.getHostName()
++ ")");
+  });
+});
+  }
+
+
+  // Format "ipAddress(hostName)networkLocation"
+  private void printNodesWithLocation(Collection<HddsProtos.Node> nodes) {
+nodes.forEach(node -> {
+  System.out.print(" &
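
The new -o/--order flag shown above groups datanodes by network location before
printing. The following standalone sketch mirrors that grouping step with plain strings
in place of HddsProtos.Node, so the types and sample values are simplifications rather
than the Ozone API.

// Simplified illustration of the ordered-by-location printing added in
// TopologySubcommand: group node addresses by rack, sort the racks, print.
import java.util.Map;
import java.util.TreeMap;
import java.util.TreeSet;

public class TopologyPrintSketch {
  public static void print(Map<String, String> nodeToLocation) {
    // location -> sorted set of "ip(hostname)" strings; TreeMap keeps racks sorted
    Map<String, TreeSet<String>> byLocation = new TreeMap<>();
    nodeToLocation.forEach((node, location) ->
        byLocation.computeIfAbsent(
            location == null ? "NA" : location, k -> new TreeSet<>()).add(node));

    byLocation.forEach((location, nodes) -> {
      System.out.println("Location: " + location);
      nodes.forEach(node -> System.out.println(" " + node));
    });
  }

  public static void main(String[] args) {
    Map<String, String> nodes = new TreeMap<>();
    nodes.put("10.0.0.1(dn1)", "/rack1");
    nodes.put("10.0.0.2(dn2)", "/rack2");
    nodes.put("10.0.0.3(dn3)", "/rack1");
    print(nodes);
  }
}
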

[hadoop] branch HDDS-1713 created (now 910ac64)

2019-07-19 Thread sammichen
This is an automated email from the ASF dual-hosted git repository.

sammichen pushed a change to branch HDDS-1713
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 910ac64  fix TestKeyManagerImpl and remove asserts which don't hold 
in some cases

This branch includes the following new commits:

 new 6e58814  HDDS-1713. ReplicationManager fails to find proper node 
topology based Datanode details from heartbeat
 new fc1a929  fix failed unit test
 new 5ea5e06  improve unit test
 new f4df9cd  trigger another build
 new 910ac64  fix TestKeyManagerImpl and remove asserts which don't hold 
in some cases

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org


