[jira] [Created] (HADOOP-16799) Fix Hadoop's AWS Integration GitHub links

2020-01-10 Thread Andrew Lane (Jira)
Andrew Lane created HADOOP-16799:


 Summary: Fix Hadoop's AWS Integration GitHub links
 Key: HADOOP-16799
 URL: https://issues.apache.org/jira/browse/HADOOP-16799
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Andrew Lane


The links in the "see also" section of the page below point to broken .html 
URLs instead of the corresponding .md links:

[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md]






[jira] [Resolved] (HADOOP-16590) IBM Java has deprecated OS login module classes and OS principal classes.

2020-01-10 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-16590.

Fix Version/s: 3.3.0
   Resolution: Fixed

[~nmarion] Thanks for the patch.  I merged pull request 1484 to trunk.

> IBM Java has deprecated OS login module classes and OS principal classes.
> -
>
> Key: HADOOP-16590
> URL: https://issues.apache.org/jira/browse/HADOOP-16590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Nicholas Marion
>Priority: Major
> Fix For: 3.3.0
>
>
> When building applications that rely on hadoop-common and using IBM Java, 
> errors such as `{{Exception in thread "main" java.io.IOException: failure to 
> login}}` and `{{Unable to find JAAS 
> classes:com.ibm.security.auth.LinuxPrincipal}}` can be seen.
> IBM Java has deprecated the following OS Login Module classes:
> {code:java}
> com.ibm.security.auth.module.Win64LoginModule
> com.ibm.security.auth.module.NTLoginModule
> com.ibm.security.auth.module.AIX64LoginModule
> com.ibm.security.auth.module.AIXLoginModule
> com.ibm.security.auth.module.LinuxLoginModule
> {code}
> and replaced them with:
> {code:java}
> com.ibm.security.auth.module.JAASLoginModule{code}
> IBM Java has deprecated the following OS Principal classes:
>  
> {code:java}
> com.ibm.security.auth.UsernamePrincipal
> com.ibm.security.auth.NTUserPrincipal
> com.ibm.security.auth.AIXPrincipal
> com.ibm.security.auth.LinuxPrincipal
> {code}
> and replaced them with:
> {code:java}
> com.ibm.security.auth.UsernamePrincipal{code}
> The older issue HADOOP-15765 reports the same problem.
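For readers hitting this, a minimal sketch of vendor-aware module selection, in the spirit of what Hadoop's UserGroupInformation does (the class below and its non-IBM fallback are illustrative, not Hadoop's actual code; the IBM module name is the replacement named above):

{code:java}
public final class OsLoginModuleSelector {

  // Hadoop detects IBM Java the same general way: via the java.vendor property.
  private static final boolean IBM_JAVA =
      System.getProperty("java.vendor", "").contains("IBM");

  /** Pick a JAAS login module class name for the current JVM. */
  public static String loginModuleName() {
    if (IBM_JAVA) {
      // Newer IBM JDKs collapse the per-OS modules into this single module.
      return "com.ibm.security.auth.module.JAASLoginModule";
    }
    // Standard OpenJDK/Oracle module on Unix-like systems.
    return "com.sun.security.auth.module.UnixLoginModule";
  }

  public static void main(String[] args) {
    System.out.println(loginModuleName());
  }
}
{code}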






[jira] [Created] (HADOOP-16798) job commit failure in S3A MR test, executor rejected submission

2020-01-10 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16798:
---

 Summary: job commit failure in S3A MR test, executor rejected 
submission
 Key: HADOOP-16798
 URL: https://issues.apache.org/jira/browse/HADOOP-16798
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


failure in 
{code}
ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.FutureTask@6e894de2 rejected from 
org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated, 
pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
{code}

The stack implies the thread pool rejected the task, but its toString says 
"Terminated". Race condition?
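For context, a self-contained repro of this failure mode (not the MR job itself): submitting to an ExecutorService that has already shut down and terminated raises exactly this RejectedExecutionException, with the pool's terminated state in the message.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class TerminatedPoolDemo {
  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    pool.shutdown();                 // with no queued work, the pool terminates
    try {
      pool.submit(() -> { });        // a submit racing with the shutdown
    } catch (RejectedExecutionException e) {
      // The message names the terminated pool, much like the stack above.
      System.out.println(e.getMessage());
    }
  }
}
{code}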






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-01-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1377/

[Jan 9, 2020 2:56:51 AM] (ayushsaxena) HDFS-15096. RBF: GetServerDefaults 
Should be Cached At Router.
[Jan 9, 2020 3:32:38 AM] (ayushsaxena) HDFS-15094. RBF: Reuse ugi string in 
ConnectionPoolID. Contributed by
[Jan 9, 2020 6:34:05 AM] (surendralilhore) HDFS-14957. INodeReference Space 
Consumed was not same in QuotaUsage and
[Jan 9, 2020 7:24:58 AM] (tasanuma) HADOOP-15993. Upgrade Kafka to 2.4.0 in 
hadoop-kafka module. (#1796)
[Jan 9, 2020 5:18:44 PM] (ericp) YARN-9018. Add functionality to 
AuxiliaryLocalPathHandler to return all
[Jan 10, 2020 12:52:13 AM] (tasanuma) HDFS-15102. HttpFS: put requests are not 
supported for path "/".




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap 
   hadoop.hdfs.server.namenode.TestRedudantBlocks 
   hadoop.hdfs.TestDatanodeRegistration 
   hadoop.hdfs.TestDeadNodeDetection 
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem 
   
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageF

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-01-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/

[Jan 9, 2020 3:52:26 PM] (jeagles) MAPREDUCE-7252. Handling 0 progress in 
SimpleExponential task runtime
[Jan 9, 2020 5:52:26 PM] (ericp) YARN-9018. Add functionality to 
AuxiliaryLocalPathHandler to return all




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.net.TestClusterTopology 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerNodeLabelUpdate
 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-compile-cc-root-jdk1.8.0_232.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-compile-javac-root-jdk1.8.0_232.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_232.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [164K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/563/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-li

[jira] [Resolved] (HADOOP-16722) S3GuardTool to support FilterFileSystem

2020-01-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16722.
-
Resolution: Fixed

This should be fixed in HADOOP-16697. Please check and re-open if not.

> S3GuardTool to support FilterFileSystem
> ---
>
> Key: HADOOP-16722
> URL: https://issues.apache.org/jira/browse/HADOOP-16722
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mingliang Liu
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16722.demo.patch
>
>
> Currently S3GuardTool operates against an S3AFileSystem implementation. 
> There are cases where the {{fs.hboss.fs.s3a.impl}} is a FilterFileSystem 
> wrapping another implementation. For example, HBASE-23314 made 
> {{HBaseObjectStoreSemantics}} a {{FilterFileSystem}} rather than a plain 
> {{FileSystem}}. S3GuardTool could use the 
> {{FilterFileSystem::getRawFileSystem}} method to get at the wrapped 
> S3AFileSystem implementation (a sketch follows below). Without this support, 
> a simple S3GuardTool run against HBOSS gets a confusing error like 
> "s3a://mybucket is not a S3A file system".
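A minimal sketch of that unwrapping. {{FilterFileSystem#getRawFileSystem}} is a real Hadoop method; the helper around it is illustrative:

{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;

final class FsUnwrap {

  /** Peel off FilterFileSystem layers (e.g. HBOSS) to reach the wrapped FS. */
  static FileSystem unwrap(FileSystem fs) {
    while (fs instanceof FilterFileSystem) {
      fs = ((FilterFileSystem) fs).getRawFileSystem();
    }
    return fs;  // e.g. the underlying S3AFileSystem
  }

  private FsUnwrap() { }
}
{code}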






[jira] [Resolved] (HADOOP-16697) audit/tune s3a authoritative flag in s3guard DDB Table

2020-01-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16697.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

Done. This is my last big bit of work for the 3.3.0 release.

> audit/tune s3a authoritative flag in s3guard DDB Table
> --
>
> Key: HADOOP-16697
> URL: https://issues.apache.org/jira/browse/HADOOP-16697
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> S3A auth mode can cause confusion in deployments, because people expect there 
> never to be any HTTP requests to S3 for a path marked as authoritative.
> This is *not* the case when S3Guard doesn't have an entry for the path in the 
> table, which is the state it is in when the directory was populated using 
> different tools (e.g. the AWS s3 command).
> Proposed:
> 1. HADOOP-16684 to give more diagnostics about the bucket.
> 2. Add an audit command that takes a path and verifies that it is marked in 
> DynamoDB as authoritative *all the way down*.
> This command is designed to be executed from the command line and will return 
> different error codes for different situations (a sketch of such a mapping 
> follows below):
> * path isn't guarded
> * path is not authoritative in s3a settings (dir, path)
> * path not known in table: use the 404/44 response
> * path contains 1+ dir entry which is non-auth
> 3. Use this audit after some of the bulk rename, delete, import, and commit 
> (soon: upload, copy) operations, to verify that, where appropriate, we do 
> update the directories. Particularly for incremental rename(), where I have 
> long suspected we may have to do more.
> 4. Review the documentation and make it clear what is needed (import) after 
> uploading/generating data through other tools.
> I'm going to pull in the open JIRAs on this topic, as they are all related.
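A hedged sketch of the exit-code mapping listed above. Only the 44 ("not found") value comes from the issue text; every other name and value here is an assumption, not the actual s3guard implementation:

{code:java}
// Illustrative only: one exit code per audit outcome listed in the issue.
final class AuthoritativeAuditExitCodes {
  static final int SUCCESS = 0;                  // authoritative all the way down
  static final int EXIT_PATH_UNGUARDED = 46;     // path isn't guarded (assumed value)
  static final int EXIT_NOT_AUTH_IN_CONFIG = 47; // not auth in s3a settings (assumed value)
  static final int EXIT_NOT_FOUND = 44;          // path not known in table: the 404/44 response
  static final int EXIT_NON_AUTH_ENTRY = 48;     // 1+ non-authoritative dir entry (assumed value)

  private AuthoritativeAuditExitCodes() { }
}
{code}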






[jira] [Resolved] (HADOOP-16474) S3Guard ProgressiveRenameTracker to mark dest dir as authoritative on success

2020-01-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16474.
-
Resolution: Duplicate

> S3Guard ProgressiveRenameTracker to mark dest dir as authoritative on success
> -
>
> Key: HADOOP-16474
> URL: https://issues.apache.org/jira/browse/HADOOP-16474
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> After a directory rename succeeds, the destination will contain only those 
> files which have been copied by the S3Guard-enabled client, with the 
> directory tree updated as new entries are added.
> At that point, the ProgressiveRenameTracker could tell the store to complete 
> the rename and, in so doing, give clients maximum performance without needing 
> any LIST commands.
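A self-contained sketch of that idea, using a stand-in for S3Guard's metadata store (the real MetadataStore/DirListingMetadata APIs differ; only the authoritative flag itself comes from S3Guard):

{code:java}
import java.util.HashMap;
import java.util.Map;

class AuthoritativeRenameSketch {

  /** Stand-in for S3Guard's per-directory listing metadata. */
  static final class DirListing {
    boolean authoritative;          // true => listings may skip S3 LIST calls
  }

  private final Map<String, DirListing> store = new HashMap<>();

  /** Record each copied file's destination directory as the rename proceeds. */
  void recordCopiedFile(String destDir) {
    store.computeIfAbsent(destDir, d -> new DirListing());
  }

  /** On success, flip the destination directory listing to authoritative. */
  void renameSucceeded(String destDir) {
    DirListing listing = store.get(destDir);
    if (listing != null) {
      listing.authoritative = true; // clients no longer need a LIST here
    }
  }
}
{code}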






[GitHub] [hadoop-thirdparty] vinayakumarb commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2020-01-10 Thread GitBox
vinayakumarb commented on a change in pull request #1: HADOOP-16595. 
[pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#discussion_r365172839
 
 

 ##
 File path: hadoop-shaded-protobuf_3_7/pom.xml
 ##
 @@ -0,0 +1,115 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <parent>
+    <artifactId>hadoop-thirdparty</artifactId>
 
 Review comment:
   This will be just a parent pom; it will not have any content to shade, 
and it will not be directly used by any of the projects.
   The actual artifacts used (e.g. hadoop-shaded-protobuf-3_7) will have a 
'shaded' prefix in their name.
   I think this will suffice.
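For readers unfamiliar with the shading approach behind this PR, a hedged illustration: consumers import the relocated protobuf package from the shaded artifact instead of com.google.protobuf. The relocated package name below follows hadoop-thirdparty's convention; the specific class used is just an example.

{code:java}
// Hedged illustration: code depending on the shaded artifact imports the
// relocated protobuf package rather than com.google.protobuf.
import org.apache.hadoop.thirdparty.protobuf.ByteString;

public class ShadedProtobufExample {
  public static void main(String[] args) {
    ByteString bs = ByteString.copyFromUtf8("hello");
    System.out.println(bs.size());   // 5
  }
}
{code}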





[GitHub] [hadoop-thirdparty] brahmareddybattula commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2020-01-10 Thread GitBox
brahmareddybattula commented on a change in pull request #1: HADOOP-16595. 
[pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#discussion_r365116836
 
 

 ##
 File path: hadoop-shaded-protobuf_3_7/pom.xml
 ##
 @@ -0,0 +1,115 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <parent>
+    <artifactId>hadoop-thirdparty</artifactId>
 
 Review comment:
   How about changing it to shaded-artifact instead of thirdparty?

