Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-12-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/542/

No changes

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Re: Next Hadoop Storage Online Meetup (APAC Mandarin)

2019-12-20 Thread Chao Sun
We are also very interested in learning from this experience, and I can help
with the translation too. Please let me know if help is needed.

Best,
Chao

On Fri, Dec 20, 2019 at 9:38 AM Chen Liang  wrote:

> Thanks again for driving the syncup, Wei-Chiu! I am also super interested
> in learning about this upgrade.
>
> Moving forward, I am happy to help with the translation. Please feel free
> to let me know if you need more hands on it :).
>
> Chen
>
Wei-Chiu Chuang wrote on Thursday, December 19, 2019, at 6:23 PM:
>
> > Yeah, for sure. I am happy to take notes.
> >
> > In terms of getting more non-Mandarin speaking community members involved
> > (which I think is very important), maybe we can have it recorded and
> > translated. I need to work out the details with Fei but I am happy to be
> > the translator if needed.
> >
> > On Thu, Dec 19, 2019 at 4:03 PM Eric Badger wrote:
> >
> > > For those of us who don't speak Mandarin, would someone be able to take
> > > notes in English? I'm very interested in hearing about the experience in
> > > moving from Hadoop 2.x to 3.x.
> > >
> > > Eric
> > >
> > > On Thu, Dec 19, 2019 at 2:07 PM Wei-Chiu Chuang wrote:
> > >
> > > > As you are probably aware, DiDi recently upgraded a large cluster from
> > > > Hadoop 2 to Hadoop 3.
> > > >
> > > > Fei Hui from DiDi graciously agreed to speak to us about their upgrade
> > > > experience at the next APAC Mandarin Online meetup, which is in two
> > > > weeks.
> > > >
> > > > So stay tuned!
> > > >
> > > > Time/Date:
> > > > Jan 1, 10 PM (US west coast, PST) / Jan 2, 2 PM (Beijing, China, CST)
> > > >
> > >
> >
>



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-12-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1356/

[Dec 19, 2019 10:27:31 AM] (iwasakims) YARN-10037. Upgrade build tools for YARN Web UI v2.
[Dec 19, 2019 5:34:43 PM] (inigoiri) HDFS-14997. BPServiceActor processes commands from NameNode
[Dec 19, 2019 5:42:17 PM] (inigoiri) HDFS-15062. Add LOG when sendIBRs failed. Contributed by Fei Hui.
[Dec 19, 2019 7:37:17 PM] (gifuma) YARN-10038. [UI] Finish Time is not correctly parsed in the RM Apps




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):

   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
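
These parse errors can be reproduced outside the build. A minimal sketch, assuming
xmllint is installed and the working directory is the root of a trunk checkout (note
that several of these files are test fixtures, some deliberately malformed, so a
parse failure here only mirrors what the report's XML check saw):

   # Check one of the flagged files directly with xmllint.
   # Any of the paths listed above can be substituted.
   xmllint --noout \
     hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml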

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core

   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use the clone method. At TaskStatus.java:[lines 39-346]
   Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId. At WorkerId.java:[line 114]
   org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for a null argument. At WorkerId.java:[lines 114-115]

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos

   Redundant nullcheck of dir, which is known to be non-null, in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). At BufferPool.java:[line 66]
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer. At CosNInputStream.java:[line 87]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]). At CosNativeFileSystemStore.java:[line 199]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]). At CosNativeFileSystemStore.java:[line 178]
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged.

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.server.namenode.TestRedudantBlocks 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.TestDeadNodeDetection 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   hadoop.yarn.
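
For anyone triaging the failures above, a minimal sketch (not part of the qbt
output) of rerunning one of the listed HDFS tests in isolation; it assumes an
already-built trunk checkout, and the module path is the usual location of these
tests:

   # Rerun a single failing test class via the standard surefire -Dtest flag.
   # The class name spelling (TestRedudantBlocks) matches the report.
   cd hadoop-hdfs-project/hadoop-hdfs
   mvn test -Dtest=TestRedudantBlocks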

[jira] [Created] (HADOOP-16774) TestDiskChecker and TestReadWriteDiskValidator fails when run with -Pparallel-tests

2019-12-20 Thread Vinayakumar B (Jira)
Vinayakumar B created HADOOP-16774:
--

 Summary: TestDiskChecker and TestReadWriteDiskValidator fails when run with -Pparallel-tests
 Key: HADOOP-16774
 URL: https://issues.apache.org/jira/browse/HADOOP-16774
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B


{noformat}
$ mvn test -Pparallel-tests -Dtest=TestReadWriteDiskValidator,TestDiskChecker -Pnative
 {noformat}
{noformat}
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   TestDiskChecker.testCheckDir_normal:111->_checkDirs:158->createTempDir:153 » NoSuchFile
[ERROR]   TestDiskChecker.testCheckDir_normal_local:180->checkDirs:205->createTempDir:153 » NoSuchFile
[ERROR]   TestDiskChecker.testCheckDir_notDir:116->_checkDirs:158->createTempFile:142 » NoSuchFile
[ERROR]   TestDiskChecker.testCheckDir_notDir_local:185->checkDirs:205->createTempFile:142 » NoSuchFile
[ERROR]   TestDiskChecker.testCheckDir_notListable:131->_checkDirs:158->createTempDir:153 » NoSuchFile
[ERROR]   TestDiskChecker.testCheckDir_notListable_local:200->checkDirs:205->createTempDir:153 » NoSuchFile
[ERROR]   TestDiskChecker.testCheckDir_notReadable:121->_checkDirs:158->createTempDir:153 » NoSuchFile
[ERROR]   TestDiskChecker.testCheckDir_notReadable_local:190->checkDirs:205->createTempDir:153 » NoSuchFile
[ERROR]   TestDiskChecker.testCheckDir_notWritable:126->_checkDirs:158->createTempDir:153 » NoSuchFile
[ERROR]   TestDiskChecker.testCheckDir_notWritable_local:195->checkDirs:205->createTempDir:153 » NoSuchFile
[ERROR]   TestReadWriteDiskValidator.testCheckFailures:114 » NoSuchFile /usr1/code/hadoo...
[ERROR]   TestReadWriteDiskValidator.testReadWriteDiskValidator:62 » DiskError Disk Chec...
[INFO] 
[ERROR] Tests run: 16, Failures: 0, Errors: 12, Skipped: 0

{noformat}
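
An illustrative way to confirm that the profile is the trigger, using only the
flags quoted above, is to run the same selection with and without
-Pparallel-tests (the expectation that the first run passes is an assumption
based on this report, not verified here):

   # Baseline run, serial: expected to pass.
   mvn test -Dtest=TestReadWriteDiskValidator,TestDiskChecker -Pnative
   # Parallel run: fails with NoSuchFile as shown above.
   mvn test -Pparallel-tests -Dtest=TestReadWriteDiskValidator,TestDiskChecker -Pnative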



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[GitHub] [hadoop-thirdparty] vinayakumarb commented on issue #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-12-20 Thread GitBox
vinayakumarb commented on issue #1: HADOOP-16595. [pb-upgrade] Create 
hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#issuecomment-567883149
 
 
   Thanks @ayushtkn for the reviews.
   
   @Apache9 @jojochuang Please take a look at the latest change as well. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[GitHub] [hadoop-thirdparty] vinayakumarb commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-12-20 Thread GitBox
vinayakumarb commented on a change in pull request #1: HADOOP-16595. 
[pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#discussion_r360321593
 
 

 ##
 File path: dev-support/bin/create-release
 ##
 @@ -0,0 +1,692 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+if [[ -z "${BASH_VERSINFO[0]}" ]] \
+   || [[ "${BASH_VERSINFO[0]}" -lt 3 ]] \
+   || [[ "${BASH_VERSINFO[0]}" -eq 3 && "${BASH_VERSINFO[1]}" -lt 2 ]]; then
+  echo "bash v3.2+ is required. Sorry."
+  exit 1
+fi
+
+function centered_text
+{
+  local text="$*"
+  local spacing=$(( (75+${#text}) /2 ))
+  printf "%*s\n"  ${spacing} "${text}"
+}
+
+function big_console_header
+{
+  printf "\n\n"
+  echo "****************************************************************************"
+  centered_text "${@}"
+  echo "****************************************************************************"
+  printf "\n\n"
+}
+
+## @description  Given a filename or dir, return the absolute version of it
+## @audience     public
+## @stability    stable
+## @param        directory
+## @replaceable  no
+## @return       0 success
+## @return       1 failure
+## @return       stdout abspath
+function hadoop_abs
+{
+  declare obj=$1
+  declare dir
+  declare fn
+  declare ret
+
+  if [[ ! -e ${obj} ]]; then
+    return 1
+  elif [[ -d ${obj} ]]; then
+    dir=${obj}
+  else
+    dir=$(dirname -- "${obj}")
+    fn=$(basename -- "${obj}")
+    fn="/${fn}"
+  fi
+
+  dir=$(cd -P -- "${dir}" >/dev/null 2>/dev/null && pwd -P)
+  ret=$?
+  if [[ ${ret} = 0 ]]; then
+    echo "${dir}${fn}"
+    return 0
+  fi
+  return 1
+}
+
+## @description  Print a message to stderr
+## @audience     public
+## @stability    stable
+## @replaceable  no
+## @param        string
+function hadoop_error
+{
+  echo "$*" 1>&2
+}
+
+
+function run_and_redirect
+{
+  declare logfile=$1
+  shift
+  declare res
+
+  echo "\$ ${*} > ${logfile} 2>&1"
+  # to the log
+  {
+    date
+    echo "cd $(pwd)"
+    echo "${*}"
+  } > "${logfile}"
+  # run the actual command
+  "${@}" >> "${logfile}" 2>&1
+  res=$?
+  if [[ ${res} != 0 ]]; then
+    echo
+    echo "Failed!"
+    echo
+    exit "${res}"
+  fi
+}
+
+function hadoop_native_flags
+{
+
+  # modified version of the Yetus personality
+
+  if [[ ${NATIVE} != true ]]; then
+    return
+  fi
+
+  # Based upon HADOOP-11937
+  #
+  # Some notes:
+  #
+  # - getting fuse to compile on anything but Linux
+  #   is always tricky.
+  # - Darwin assumes homebrew is in use.
+  # - HADOOP-12027 required for bzip2 on OS X.
+  # - bzip2 is broken in lots of places.
+  #   e.g., HADOOP-12027 for OS X, so no -Drequire.bzip2
+  #
+
+  case "${OSNAME}" in
+    Linux)
+      # shellcheck disable=SC2086
+      echo -Pnative -Drequire.snappy -Drequire.openssl -Drequire.fuse
+    ;;
+    Darwin)
+      echo \
+        -Pnative -Drequire.snappy \
+        -Drequire.openssl \
+        -Dopenssl.prefix=/usr/local/opt/openssl/ \
+        -Dopenssl.include=/usr/local/opt/openssl/include \
+        -Dopenssl.lib=/usr/local/opt/openssl/lib
+    ;;
+    *)
+      # shellcheck disable=SC2086
+      echo \
+        -Pnative \
+        -Drequire.snappy -Drequire.openssl \
+        -Drequire.test.libhadoop
+    ;;
+  esac
+}
+
+# Function to probe the exit code of the script commands,
+# and stop in the case of failure with a contextual error
+# message.
+function run()
+{
+  declare res
+  declare logfile
+
+  echo "\$ ${*}"
+  "${@}"
+  res=$?
+  if [[ ${res} != 0 ]]; then
+    echo
+    echo "Failed!"
+    echo
+    exit "${res}"
+  fi
+}
+
+function header()
+{
+  echo
+  printf "\n\n"
+  echo "****************************************************************************"
+  echo "****************************************************************************"
+  centered_text "Hadoop Thirdparty Release Creator"
+  echo "****************************************************************************"
+  echo "****************************************************************************"
+  printf "\n\n"
+  echo "Version to create      : ${HADOOP_VERSION}"
+  echo "Release Candidate Label: ${RC_LABEL##-}"
+  echo "Source Version : $
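
As an aside for reviewers, a hypothetical sketch (not part of the patch) of how
the helpers in the excerpt compose; the path and mvn arguments below are
placeholders:

   # hadoop_abs prints an absolute path on stdout and fails if the path does
   # not exist; run echoes a command and aborts the script on failure;
   # run_and_redirect does the same while capturing all output in a logfile.
   BASEDIR=$(hadoop_abs "$(dirname "$0")/..") || { hadoop_error "cannot resolve basedir"; exit 1; }
   run cd "${BASEDIR}"
   run_and_redirect /tmp/mvn-install.log mvn clean install -DskipTests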

[GitHub] [hadoop-thirdparty] vinayakumarb commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-12-20 Thread GitBox
vinayakumarb commented on a change in pull request #1: HADOOP-16595. 
[pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#discussion_r360321079
 
 

 ##
 File path: dev-support/bin/create-release
 ##