[jira] [Commented] (HADOOP-9851) dfs -chown does not like "+" plus sign in user name

2020-06-09 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17130099#comment-17130099
 ] 

Ayush Saxena commented on HADOOP-9851:
--

[~boky01] It seems you allowed '+' for Windows as well; I think Windows doesn't 
support '+' in user names.
Can you check once? I checked on Linux and it allows '+' in user names. If not, 
we can just allow it for Linux only, and this should be good to go.
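
For illustration, a minimal sketch of what an OS-gated owner pattern could look like; this is an assumption about the patch's shape, not the actual FsShellPermissions code:

{code:java}
import java.util.regex.Pattern;

public class OwnerPatternSketch {
  // Hypothetical: permit '+' in owner/group names only on non-Windows hosts.
  private static final boolean WINDOWS =
      System.getProperty("os.name").toLowerCase().startsWith("win");
  private static final String NAME =
      WINDOWS ? "[-_./@a-zA-Z0-9]+" : "[-+_./@a-zA-Z0-9]+";
  private static final Pattern OWNER_GROUP =
      Pattern.compile("^\\s*(" + NAME + ")?([:](" + NAME + ")?)?\\s*$");

  public static void main(String[] args) {
    // Matches on Linux; rejected on Windows under this sketch.
    System.out.println(
        OWNER_GROUP.matcher("MYCOMPANY+marc.villacorta:hadoop").matches());
  }
}
{code}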

> dfs -chown does not like "+" plus sign in user name
> ---
>
> Key: HADOOP-9851
> URL: https://issues.apache.org/jira/browse/HADOOP-9851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Marc Villacorta
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-9851.01.patch
>
>
> I intend to set user and group:
> *User:* _MYCOMPANY+marc.villacorta_
> *Group:* hadoop
> where _'+'_ is what we use as a winbind separator.
> And this is what I get:
> {code:none}
> sudo -u hdfs hadoop fs -touchz /tmp/test.txt
> sudo -u hdfs hadoop fs -chown MYCOMPANY+marc.villacorta:hadoop /tmp/test.txt
> -chown: 'MYCOMPANY+marc.villacorta:hadoop' does not match expected pattern 
> for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> {code}
> I am using version: 2.0.0-cdh4.3.0
> Quote 
> [source|http://h30097.www3.hp.com/docs/iass/OSIS_62/MAN/MAN8/0044.HTM]:
> {quote}
> winbind separator
>The winbind separator option allows you to specify how NT domain names
>and user names are combined into unix user names when presented to
>users. By default, winbindd will use the traditional '\' separator so
>that the unix user names look like DOMAIN\username. In some cases this
>separator character may cause problems as the '\' character has
>special meaning in unix shells. In that case you can use the winbind
>separator option to specify an alternative separator character. Good
>alternatives may be '/' (although that conflicts with the unix
>directory separator) or a '+' character. The '+' character appears to
>be the best choice for 100% compatibility with existing unix
>utilities, but may be an aesthetically bad choice depending on your
>taste.
>Default: winbind separator = \
>Example: winbind separator = +
> {quote}






[jira] [Commented] (HADOOP-17059) ArrayIndexOutOfBoundsException in ViewFileSystem#listStatus

2020-06-09 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17130079#comment-17130079
 ] 

Akira Ajisaka commented on HADOOP-17059:


It seems the network is unstable. Kicked the precommit job again: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16976/

> ArrayIndexOutOfBoundsException in ViewFileSystem#listStatus
> 
>
> Key: HADOOP-17059
> URL: https://issues.apache.org/jira/browse/HADOOP-17059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HADOOP-17059-branch-2.10-00.patch, HADOOP-17059.001.patch
>
>
> In ViewFileSystem#listStatus, we get the group names of the UGI. If no group 
> names exist, it will throw an ArrayIndexOutOfBoundsException:
> {code:java}
> else {
>   result[i++] = new FileStatus(0, true, 0, 0,
>       creationTime, creationTime, PERMISSION_555,
>       ugi.getShortUserName(), ugi.getGroupNames()[0],
>       new Path(inode.fullPath).makeQualified(myUri, null));
> } {code}
>  
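
For context, a guard along these lines would avoid the exception; this is a sketch of the quoted snippet with a fallback, not necessarily the committed patch:

{code:java}
// Sketch: fall back to the short user name when the UGI reports no groups,
// instead of indexing into a possibly empty array.
String[] groups = ugi.getGroupNames();
String group = (groups.length > 0) ? groups[0] : ugi.getShortUserName();
result[i++] = new FileStatus(0, true, 0, 0,
    creationTime, creationTime, PERMISSION_555,
    ugi.getShortUserName(), group,
    new Path(inode.fullPath).makeQualified(myUri, null));
{code}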






[GitHub] [hadoop] vinayakumarb commented on pull request #2026: HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes

2020-06-09 Thread GitBox


vinayakumarb commented on pull request #2026:
URL: https://github.com/apache/hadoop/pull/2026#issuecomment-641717069


   > I think I closed this PR by mistake while replying to a comment, and I'm not able to re-open it.
   > I will raise a new PR for this. Please review.
   
   I had pushed trunk itself, without my changes, to the source branch, so the PR got closed.
   Now it's back.






[GitHub] [hadoop] vinayakumarb commented on pull request #2026: HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes

2020-06-09 Thread GitBox


vinayakumarb commented on pull request #2026:
URL: https://github.com/apache/hadoop/pull/2026#issuecomment-641715517


   I think I closed this PR by mistake while replying to a comment, and I'm not 
able to re-open it.
   I will raise a new PR for this. Please review.






[GitHub] [hadoop] vinayakumarb closed pull request #2026: HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes

2020-06-09 Thread GitBox


vinayakumarb closed pull request #2026:
URL: https://github.com/apache/hadoop/pull/2026


   






[GitHub] [hadoop] vinayakumarb commented on a change in pull request #2026: HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes

2020-06-09 Thread GitBox


vinayakumarb commented on a change in pull request #2026:
URL: https://github.com/apache/hadoop/pull/2026#discussion_r437853175



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
##
@@ -74,6 +75,11 @@
 
   private static String taskName = "attempt_001_02_r03_04_05";
 
+  @After
+  public void after() throws Exception {
+    cleanTokenPasswordFile();
+  }
+

Review comment:
   Pushed the latest changes without this.
   Please check.








[jira] [Comment Edited] (HADOOP-15338) Java 11 runtime support

2020-06-09 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129854#comment-17129854
 ] 

Kihwal Lee edited comment on HADOOP-15338 at 6/9/20, 10:36 PM:
---

Maybe I missed this being discussed before.
I see illegal access warnings when I run FsShell commands.
{noformat}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.xbill.DNS.ResolverConfig  to method 
sun.net.dns.ResolverConfiguration.open()
WARNING: Please consider reporting this to the maintainers of 
org.xbill.DNS.ResolverConfig
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
{noformat}
Of course, it is just the first warning. If you set {{--illegal-access=debug}}, 
you see a whole lot more. The default is still {{permit}} in JDK 11, and the code 
works despite the warning message. Are we doing anything to address these?


was (Author: kihwal):
Maybe I missed this being discussed before.
I see illegal access warnings when I run FsShell commands.
{noformat}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.xbill.DNS.ResolverConfig  to method 
sun.net.dns.ResolverConfiguration.open()
WARNING: Please consider reporting this to the maintainers of 
org.xbill.DNS.ResolverConfig
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
{noformat}

Of course, it is just the first warning. If you set {{--illegal-access=debug}}, 
you see a whole lot more. The default is still {{permit}}, and the code works 
despite the warning message. Are we doing anything to address these?

> Java 11 runtime support
> ---
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.






[jira] [Commented] (HADOOP-15338) Java 11 runtime support

2020-06-09 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129854#comment-17129854
 ] 

Kihwal Lee commented on HADOOP-15338:
-

Maybe I missed this being discussed before.
I see illegal access warnings when I run FsShell commands.
{noformat}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.xbill.DNS.ResolverConfig  to method 
sun.net.dns.ResolverConfiguration.open()
WARNING: Please consider reporting this to the maintainers of 
org.xbill.DNS.ResolverConfig
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
{noformat}

Of course, it is just the first warning. If you set {{--illegal-access=debug}}, 
you see a whole lot more. The default is still {{permit}}, and the code works 
despite the warning message. Are we doing anything to address these?

> Java 11 runtime support
> ---
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.






[jira] [Comment Edited] (HADOOP-17063) S3ABlockOutputStream.putObject looks stuck and never timeout

2020-06-09 Thread Dyno (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129802#comment-17129802
 ] 

Dyno edited comment on HADOOP-17063 at 6/9/20, 9:03 PM:


It happened again; I have attached the jstack. Thanks for looking into it.

I was trying to implement the change you suggested, but the test instructions 
do not look quite clear.

Is it enough to run the tests under hadoop-tools/hadoop-aws/?


was (Author: fu):
It happened again; I have attached the jstack. Thanks for looking into it.

I was trying to implement the change you suggested, but the test instructions 
do not look quite clear.

> S3ABlockOutputStream.putObject looks stuck and never timeout
> 
>
> Key: HADOOP-17063
> URL: https://issues.apache.org/jira/browse/HADOOP-17063
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.1
> Environment: hadoop 3.2.1
> spark 2.4.4
>  
>Reporter: Dyno
>Priority: Minor
> Attachments: jstack_exec-34.log, jstack_exec-40.log, 
> jstack_exec-74.log
>
>
> {code}
> sun.misc.Unsafe.park(Native Method) 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:523) 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:446)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
>  
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>  org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) 
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
>  org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685) 
> org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
>  
> org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
>  
> org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:74)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
>  
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
>  org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> org.apache.spark.scheduler.Task.run(Task.scala:123) 
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>  org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  java.lang.Thread.run(Thread.java:748)
> {code}
>  
> We are using Spark 2.4.4 with Hadoop 3.2.1 on Kubernetes (spark-operator); 
> sometimes we see this hang with the stack trace above. It looks like 
> putObject never returns, and we have to kill the executor to make the job 
> move forward.
>  
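
One mitigation sometimes tried while the root cause is investigated is tightening the S3A client timeouts and retry caps. The property names below are standard S3A settings, but whether they bound this particular wait has not been established in this thread; treat it as a sketch:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class S3ATimeoutSketch {
  public static Configuration tightenedS3AConf() {
    Configuration conf = new Configuration();
    // Connection establishment and socket timeouts, in milliseconds.
    conf.set("fs.s3a.connection.establish.timeout", "5000");
    conf.set("fs.s3a.connection.timeout", "200000");
    // Cap on how many times the embedded AWS client retries a request.
    conf.set("fs.s3a.attempts.maximum", "10");
    return conf;
  }
}
{code}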






[jira] [Commented] (HADOOP-17063) S3ABlockOutputStream.putObject looks stuck and never timeout

2020-06-09 Thread Dyno (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129802#comment-17129802
 ] 

Dyno commented on HADOOP-17063:
---

It happened again; I have attached the jstack. Thanks for looking into it.

I was trying to implement the change you suggested, but the test instructions 
do not look quite clear.

> S3ABlockOutputStream.putObject looks stuck and never timeout
> 
>
> Key: HADOOP-17063
> URL: https://issues.apache.org/jira/browse/HADOOP-17063
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.1
> Environment: hadoop 3.2.1
> spark 2.4.4
>  
>Reporter: Dyno
>Priority: Minor
> Attachments: jstack_exec-34.log, jstack_exec-40.log, 
> jstack_exec-74.log
>
>
> {code}
> sun.misc.Unsafe.park(Native Method) 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:523) 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:446)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
>  
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>  org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) 
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
>  org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685) 
> org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
>  
> org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
>  
> org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:74)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
>  
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
>  org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> org.apache.spark.scheduler.Task.run(Task.scala:123) 
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>  org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  java.lang.Thread.run(Thread.java:748)
> {code}
>  
> We are using Spark 2.4.4 with Hadoop 3.2.1 on Kubernetes (spark-operator); 
> sometimes we see this hang with the stack trace above. It looks like 
> putObject never returns, and we have to kill the executor to make the job 
> move forward.
>  






[jira] [Updated] (HADOOP-17063) S3ABlockOutputStream.putObject looks stuck and never timeout

2020-06-09 Thread Dyno (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dyno updated HADOOP-17063:
--
Attachment: jstack_exec-74.log
            jstack_exec-40.log
            jstack_exec-34.log

> S3ABlockOutputStream.putObject looks stuck and never timeout
> 
>
> Key: HADOOP-17063
> URL: https://issues.apache.org/jira/browse/HADOOP-17063
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.1
> Environment: hadoop 3.2.1
> spark 2.4.4
>  
>Reporter: Dyno
>Priority: Minor
> Attachments: jstack_exec-34.log, jstack_exec-40.log, 
> jstack_exec-74.log
>
>
> {code}
> sun.misc.Unsafe.park(Native Method) 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:523) 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:446)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
>  
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>  org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) 
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
>  org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685) 
> org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
>  
> org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
>  
> org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:74)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
>  
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
>  org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> org.apache.spark.scheduler.Task.run(Task.scala:123) 
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>  org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  java.lang.Thread.run(Thread.java:748)
> {code}
>  
> We are using Spark 2.4.4 with Hadoop 3.2.1 on Kubernetes (spark-operator); 
> sometimes we see this hang with the stack trace above. It looks like 
> putObject never returns, and we have to kill the executor to make the job 
> move forward.
>  






[GitHub] [hadoop] goiri commented on a change in pull request #2055: HDFS-15393: Review of PendingReconstructionBlocks

2020-06-09 Thread GitBox


goiri commented on a change in pull request #2055:
URL: https://github.com/apache/hadoop/pull/2055#discussion_r437711091



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java
##
@@ -125,70 +128,54 @@ boolean decrement(BlockInfo block, DatanodeStorageInfo dn) {
    *  removed
    */
   PendingBlockInfo remove(BlockInfo block) {
-    synchronized (pendingReconstructions) {
-      return pendingReconstructions.remove(block);
-    }
+    return pendingReconstructions.remove(block);
   }
 
   public void clear() {
-    synchronized (pendingReconstructions) {
       pendingReconstructions.clear();
-      synchronized (timedOutItems) {
-        timedOutItems.clear();
-      }
+      timedOutItems.clear();
       timedOutCount = 0L;
-    }
   }
 
   /**
    * The total number of blocks that are undergoing reconstruction.
    */
   int size() {
-    synchronized (pendingReconstructions) {
-      return pendingReconstructions.size();
-    }
+    return pendingReconstructions.size();
   }
 
   /**
    * How many copies of this block is pending reconstruction?.
    */
   int getNumReplicas(BlockInfo block) {
-    synchronized (pendingReconstructions) {
-      PendingBlockInfo found = pendingReconstructions.get(block);
-      if (found != null) {
-        return found.getNumReplicas();
-      }
-    }
-    return 0;
+    PendingBlockInfo found = pendingReconstructions.get(block);
+    return (found == null) ? 0 : found.getNumReplicas();
   }
 
   /**
    * Used for metrics.
    * @return The number of timeouts
    */
   long getNumTimedOuts() {
-    synchronized (timedOutItems) {
-      return timedOutCount + timedOutItems.size();
-    }
+    return timedOutCount + timedOutItems.size();

Review comment:
   Unlikely, but this can technically still be inconsistent, right?
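
   For illustration, one way to make the read atomic again (a sketch that simply re-guards the sum, assuming all writers take the same monitor):

   ```java
   // Sketch: guard the count and the list size under one monitor so the sum
   // cannot interleave with a concurrent timeout scan.
   long getNumTimedOuts() {
     synchronized (timedOutItems) {
       return timedOutCount + timedOutItems.size();
     }
   }
   ```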

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java
##
@@ -296,46 +275,42 @@ void pendingReconstructionCheck() {
   public Daemon getTimerThread() {
     return timerThread;
   }
-  /*
-   * Shuts down the pending reconstruction monitor thread.
-   * Waits for the thread to exit.
+
+  /**
+   * Shuts down the pending reconstruction monitor thread. Waits for the thread
+   * to exit.
    */
   void stop() {
-    fsRunning = false;
-    if(timerThread == null) return;
+    if (timerThread == null)
+      return;

Review comment:
   Add braces?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java
##
@@ -296,46 +275,42 @@ void pendingReconstructionCheck() {
   public Daemon getTimerThread() {
     return timerThread;
   }
-  /*
-   * Shuts down the pending reconstruction monitor thread.
-   * Waits for the thread to exit.
+
+  /**
+   * Shuts down the pending reconstruction monitor thread. Waits for the thread
+   * to exit.
    */
   void stop() {
-    fsRunning = false;

Review comment:
   I kind of like this. Do you think catching the interrupt is enough?
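
   If catching the exception alone feels insufficient, a common idiom (a sketch of one option) is to also restore the thread's interrupt status:

   ```java
   try {
     timerThread.join(3000);
   } catch (InterruptedException ie) {
     LOG.debug("PendingReconstructionMonitor stop is interrupted", ie);
     // Preserve the interrupt for callers further up the stack.
     Thread.currentThread().interrupt();
   }
   ```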

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java
##
@@ -296,46 +275,42 @@ void pendingReconstructionCheck() {
   public Daemon getTimerThread() {
     return timerThread;
   }
-  /*
-   * Shuts down the pending reconstruction monitor thread.
-   * Waits for the thread to exit.
+
+  /**
+   * Shuts down the pending reconstruction monitor thread. Waits for the thread
+   * to exit.
    */
   void stop() {
-    fsRunning = false;
-    if(timerThread == null) return;
+    if (timerThread == null)
+      return;
     timerThread.interrupt();
     try {
       timerThread.join(3000);
     } catch (InterruptedException ie) {
+      LOG.debug("PendingReconstructionMonitor stop is interrupted", ie);
     }
   }
 
   /**
    * Iterate through all items and print them.
    */
   void metaSave(PrintWriter out) {
-    synchronized (pendingReconstructions) {

Review comment:
   Why is this no longer needed?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java
##
@@ -217,20 +203,16 @@ void setTimeStamp() {
     }
 
     void incrementReplicas(DatanodeStorageInfo... newTargets) {
-      if (newTargets != null) {
-        for (DatanodeStorageInfo newTarget : newTargets) {
-          if (!targets.contains(newTarget)) {
-            targets.add(newTarget);
-          }
-        }
+      for (DatanodeStorageInfo newTarget : newTargets) {
+        targets.add(newTarget);
       }
     }
 
     void decrementReplicas(DatanodeStorageInfo dn) {
       Iterator<DatanodeStorageInfo> iterator = targets.iterator();
       while (iterator.hasNext()) {
         DatanodeStorageInfo next = iterator.next();
-

[GitHub] [hadoop] ayushtkn edited a comment on pull request #2061: HADOOP-17060. listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory

2020-06-09 Thread GitBox


ayushtkn edited a comment on pull request #2061:
URL: https://github.com/apache/hadoop/pull/2061#issuecomment-641559980


   Thanks @umamaheswararao, the test is really helpful. I tried it and found 
that the FileStatus really are different in the case of HDFS as well:
   ```
   Through ListStatus 
   HdfsLocatedFileStatus{path=hdfs://localhost:42219/links/linkDir; 
isDirectory=false; length=0; replication=0; blocksize=0; 
modification_time=1591733953915; access_time=1591733953915; owner=ayush; 
group=supergroup; permission=rwxrwxrwx; isSymlink=true; 
symlink=/user/targetRegularDir; hasAcl=false; isEncrypted=false; 
isErasureCoded=false}
   
   Through getFileStatus
   HdfsLocatedFileStatus{path=hdfs://localhost:42219/user/targetRegularDir; 
isDirectory=true; modification_time=1591733953674; access_time=0; owner=ayushS; 
group=hadoop; permission=rwxr-xr-x; isSymlink=false; hasAcl=false; 
isEncrypted=false; isErasureCoded=false}
   ```
   
   >Probably we should just clarify the behavior in the user guide and API docs 
about the behavior in the symlinks case? Otherwise fixing this needs to be done 
in all the other places, and it will be an incompatible change across.
   
   Yeah, documenting this properly sounds most apt as of now.
   
   >One advantage I see with the existing behavior is that, with listStatus, we 
can know the dir is a symlink. If one wants to know the targetFs details, then 
issuing getFileStatus on that path will resolve to the targetFS and get the 
FileStatus at the targetFS.
   
   Correct; provided people know about this behavior, it may be helpful in many 
places, but there is probably a lack of documentation/awareness around symlinks.
   
   > we see Ambari Files View is blocked due to this for ViewFS. 
   
   If we don't fix this, won't Srinivasu Majeti's problem stay unsolved? 
   
   But if there aren't any further differing opinions, and provided HDFS is 
also behaving similarly, we don't have much scope for changing just ViewFs, so 
documenting it properly, as you said, should be our final option.






[GitHub] [hadoop] ayushtkn commented on pull request #2061: HADOOP-17060. listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory

2020-06-09 Thread GitBox


ayushtkn commented on pull request #2061:
URL: https://github.com/apache/hadoop/pull/2061#issuecomment-641559980


   Thanks @umamaheswararao, the test is really helpful. I tried it and found 
that the FileStatus really are different in the case of HDFS as well:
   ```
   Through ListStatus 
   HdfsLocatedFileStatus{path=hdfs://localhost:42219/links/linkDir; 
isDirectory=false; length=0; replication=0; blocksize=0; 
modification_time=1591733953915; access_time=1591733953915; owner=ayush; 
group=supergroup; permission=rwxrwxrwx; isSymlink=true; 
symlink=/user/targetRegularDir; hasAcl=false; isEncrypted=false; 
isErasureCoded=false}
   
   Through getFileStatus
   HdfsLocatedFileStatus{path=hdfs://localhost:42219/user/targetRegularDir; 
isDirectory=true; modification_time=1591733953674; access_time=0; owner=ayushS; 
group=hadoop; permission=rwxr-xr-x; isSymlink=false; hasAcl=false; 
isEncrypted=false; isErasureCoded=false}
   ```
   
   >Probably we should just clarify the behavior in the user guide and API docs 
about the behavior in the symlinks case? Otherwise fixing this needs to be done 
in all the other places, and it will be an incompatible change across.
   
   Yeah, documenting this properly sounds most apt as of now.
   
   >One advantage I see with the existing behavior is that, with listStatus, we 
can know the dir is a symlink. If one wants to know the targetFs details, then 
issuing getFileStatus on that path will resolve to the targetFS and get the 
FileStatus at the targetFS.
   Correct; provided people know about this behavior, it may be helpful in many 
places, but there is probably a lack of documentation/awareness around symlinks.
   
   > we see Ambari Files View is blocked due to this for ViewFS. 
   If we don't fix this, won't Srinivasu Majeti's problem stay unsolved? 
   
   But if there aren't any further differing opinions, and provided HDFS is 
also behaving similarly, we don't have much scope for changing just ViewFs, so 
documenting it properly, as you said, should be our final option.






[GitHub] [hadoop] sodonnel merged pull request #2054: HDFS-15386 ReplicaNotFoundException keeps happening in DN after remov…

2020-06-09 Thread GitBox


sodonnel merged pull request #2054:
URL: https://github.com/apache/hadoop/pull/2054


   






[GitHub] [hadoop] sodonnel commented on pull request #2054: HDFS-15386 ReplicaNotFoundException keeps happening in DN after remov…

2020-06-09 Thread GitBox


sodonnel commented on pull request #2054:
URL: https://github.com/apache/hadoop/pull/2054#issuecomment-641558702


   This change looks good, and even though the CI isn't running, it's a minor 
change compared to what was fine on trunk. I will go ahead and merge it.






[jira] [Updated] (HADOOP-17071) MiniMRYarnCluster has hard-coded timeout waiting to start history server, with no way to disable

2020-06-09 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HADOOP-17071:
--
Description: 
Over in HBase, we've been chasing intermittent Jenkins failures in tests 
involving MiniMRYarnCluster. In the latest incarnation, HBASE-24493, we finally 
tracked this down to a hard-coded 60-second timeout in MiniMRYarnCluster on 
bringing up the JobHistoryServer... a feature we cannot disable for the purpose 
of this test. We've had to disable running these tests for the time being, 
which is less than ideal.

Would be great for MiniMRYarnCluster to (1) make JHS optional and/or (2) make 
this timeout duration configurable.

  was:
Over in HBase, we've been chasing intermittent Jenkins failures in tests 
involving MiniMRYarnCluster. In the latest incarnation, HBASE-24493, we finally 
tracked this down to a hard-coded 60-second timeout in MiniMRYarnCluster on 
bringing up the JobHistoryServer... a feature we cannot disable for the purpose 
of this test. We've had to disable running these tests for the time being, 
which is less than ideal.

Would be great for MiniMRYarnCluster to (1) make JHS optional and/or (2) make 
this timeout duration configurable.


> MiniMRYarnCluster has hard-coded timeout waiting to start history server, 
> with no way to disable
> 
>
> Key: HADOOP-17071
> URL: https://issues.apache.org/jira/browse/HADOOP-17071
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Nick Dimiduk
>Priority: Major
>
> Over in HBase, we've been chasing intermittent Jenkins failures in tests 
> involving MiniMRYarnCluster. In the latest incarnation, HBASE-24493, we 
> finally tracked this down to a hard-coded 60-second timeout in 
> MiniMRYarnCluster on bringing up the JobHistoryServer... a feature we cannot 
> disable for the purpose of this test. We've had to disable running these 
> tests for the time being, which is less than ideal.
> Would be great for MiniMRYarnCluster to (1) make JHS optional and/or (2) make 
> this timeout duration configurable.






[jira] [Created] (HADOOP-17071) MiniMRYarnCluster has hard-coded timeout waiting to start history server, with no way to disable

2020-06-09 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HADOOP-17071:
-

 Summary: MiniMRYarnCluster has hard-coded timeout waiting to start 
history server, with no way to disable
 Key: HADOOP-17071
 URL: https://issues.apache.org/jira/browse/HADOOP-17071
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Reporter: Nick Dimiduk


Over in HBase, we've been chasing intermittent Jenkins failures in tests 
involving MiniMRYarnCluster. In the latest incarnation, HBASE-24493, we finally 
tracked this down to a hard-coded 60-second timeout in MiniMRYarnCluster on 
bringing up the JobHistoryServer... a feature we cannot disable for the purpose 
of this test. We've had to disable running these tests for the time being, 
which is less than ideal.

Would be great for MiniMRYarnCluster to (1) make JHS optional and/or (2) make 
this timeout duration configurable.
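
For illustration, a sketch of the configurable-timeout option; the property name and the readiness check here are hypothetical, not existing Hadoop APIs:

{code:java}
// Sketch only: replace the hard-coded 60s wait with a configurable one.
// "mapreduce.minicluster.jhs.start.timeout.ms" is a hypothetical key.
long timeoutMs = conf.getLong(
    "mapreduce.minicluster.jhs.start.timeout.ms", 60_000L);
long deadline = System.currentTimeMillis() + timeoutMs;
// historyServerStarted() stands in for whatever readiness check the
// mini cluster uses while waiting on the JobHistoryServer.
while (!historyServerStarted() && System.currentTimeMillis() < deadline) {
  Thread.sleep(100);
}
{code}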






[jira] [Comment Edited] (HADOOP-17060) listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory

2020-06-09 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129736#comment-17129736
 ] 

Uma Maheswara Rao G edited comment on HADOOP-17060 at 6/9/20, 8:20 PM:
---

Hi [~smajeti],

Copied my PR comment here:

I have verified symlinks behavior in HDFS.

 
{code:java}
  public void testSymlinkOnHDFS() throws Exception {
    // add your HDFS URI here, e.g. hdfs://10.0.1.75:9000
    URI hdfsURI = dfs.getUri();
    FileSystem.enableSymlinks();
    try (FileSystem hdfs = new DistributedFileSystem()) {
      hdfs.initialize(hdfsURI, new HdfsConfiguration());
      final Path targetLinkDir = new Path("/user", "targetRegularDir");
      hdfs.mkdirs(targetLinkDir);
      Path symLinkDir = new Path("/links/linkDir");
      hdfs.createSymlink(targetLinkDir, symLinkDir, true);
      // ListStatus Test
      FileStatus[] listStatus = hdfs.listStatus(new Path("/links"));
      FileStatus fsFromLs = listStatus[0]; // FileStatus of /links/linkDir
      Assert.assertEquals(fsFromLs.isDirectory(), false);
      Assert.assertEquals("/links/linkDir",
          Path.getPathWithoutSchemeAndAuthority(fsFromLs.getPath()).toString());
      // GetFileStatus test
      // FileStatus of /links/linkDir
      FileStatus fileStatus = hdfs.getFileStatus(symLinkDir);
      Assert.assertEquals(true, fileStatus.isDirectory());
      // resolved to FileStatus of /user/targetRegularDir
      Assert.assertEquals("/user/targetRegularDir", Path
          .getPathWithoutSchemeAndAuthority(fileStatus.getPath()).toString());
    }
  }
{code}

It turns out that the behaviors of listStatus and getFileStatus are different: 
they return different FileStatus objects. The behavior is the same in ViewFS.

getFileStatus(/test) just runs on the resolved path directly, so it will not be 
represented as a symlink.
listStatus gets /test as a child FileStatus object, but that one is represented 
as a symlink.

Probably we should just clarify the behavior in the user guide and API docs for 
the symlinks case? Otherwise fixing this needs to be done in all the other 
places, and it will be an incompatible change across.
One advantage I see with the existing behavior is that, with listStatus, we can 
know whether a dir is a symlink. If one wants to know the targetFs details, then 
issuing getFileStatus on that path will resolve to the targetFS and get the 
FileStatus at the targetFS.
I will also check with Sanjay on this and update here.


was (Author: umamaheswararao):
Hi [~smajeti],

Copied my PR comment here:

I have verified symlinks behavior in HDFS.

 
{code:java}
  public void testSymlinkOnHDFS() throws Exception {
    // add your HDFS URI here, e.g. hdfs://10.0.1.75:9000
    URI hdfsURI = dfs.getUri();
    FileSystem.enableSymlinks();
    try (FileSystem hdfs = new DistributedFileSystem()) {
      hdfs.initialize(hdfsURI, new HdfsConfiguration());
      final Path targetLinkDir = new Path("/user", "targetRegularDir");
      hdfs.mkdirs(targetLinkDir);
      Path symLinkDir = new Path("/links/linkDir");
      hdfs.createSymlink(targetLinkDir, symLinkDir, true);
      // ListStatus Test
      FileStatus[] listStatus = hdfs.listStatus(new Path("/links"));
      FileStatus fsFromLs = listStatus[0]; // FileStatus of /links/linkDir
      Assert.assertEquals(fsFromLs.isDirectory(), false);
      Assert.assertEquals("/links/linkDir",
          Path.getPathWithoutSchemeAndAuthority(fsFromLs.getPath()).toString());
      // GetFileStatus test
      // FileStatus of /links/linkDir
      FileStatus fileStatus = hdfs.getFileStatus(symLinkDir);
      Assert.assertEquals(true, fileStatus.isDirectory());
      // resolved to FileStatus of /user/targetRegularDir
      Assert.assertEquals("/user/targetRegularDir", Path
          .getPathWithoutSchemeAndAuthority(fileStatus.getPath()).toString());
    }
  }
{code}
 


It turns out that the behaviors of listStatus and getFileStatus are different: 
they return different FileStatus objects. The behavior is the same in ViewFS.

getFileStatus(/test) just runs on the resolved path directly, so it will not be 
represented as a symlink.
listStatus(/) gets /test as a child FileStatus object, but that one is 
represented as a symlink.

Probably we should just clarify the behavior in the user guide and API docs for 
the symlinks case? Otherwise fixing this needs to be done in all the other 
places, and it will be an incompatible change across.
One advantage I see with the existing behavior is that, with listStatus, we can 
know whether a dir is a symlink. If one wants to know the targetFs details, then 
issuing getFileStatus on that path will resolve to the targetFS and get the 
FileStatus at the targetFS.
I will also check with Sanjay on this and update here.

> listStatus and getFileStatus behave inconsistent in the case of ViewFs 
> implementation for isDirectory
> -
>
> 

[jira] [Commented] (HADOOP-17029) ViewFS does not return correct user/group and ACL

2020-06-09 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129762#comment-17129762
 ] 

Uma Maheswara Rao G commented on HADOOP-17029:
--

[~abhishekd], I have reopened this issue to discuss further. 

As we are representing mount links as symlinks in ViewFS, I am just wondering 
whether we should leave it showing the link permissions only? The link 
permissions in ViewFS are kind of static, since links are only updated from the 
XML configuration.

When we check symlinks on Linux or macOS, the link permissions are not changed 
if you change the targetDir permissions. That means it does not show any 
targetDir permissions on the link.

 
{code:java}
lrwxr-xr-x   1 umagangumalla  own   6 Jun  6 15:54 srcLink -> target
drwxrwxrwx   3 umagangumalla  own  96 Jun  8 11:05 target
{code}
 

However, getFileStatus will get the resolved path's FileStatus, so we can get 
the target directory permissions there. 

How does this impact your scenarios? It looks like this behavior is the same in 
the HDFS symlink case as well.

 
{code:java}
  public void testPermissionsWithSymlinksOnHDFS() throws Exception {
    // add your HDFS URI here, e.g. hdfs://10.0.1.75:9000
    URI hdfsURI = dfs.getUri();
    FileSystem.enableSymlinks();
    try (FileSystem hdfs = new DistributedFileSystem()) {
      hdfs.initialize(hdfsURI, new HdfsConfiguration());
      final Path targetLinkDir = new Path("/user", "targetRegularDir");
      hdfs.mkdirs(targetLinkDir);
      Path symLinkDir = new Path("/links/linkDir");
      hdfs.createSymlink(targetLinkDir, symLinkDir, true);
      // ListStatus Test
      FileStatus[] listStatus = hdfs.listStatus(new Path("/links"));
      FileStatus fsFromLs = listStatus[0]; // FileStatus of /links/linkDir
      Assert.assertEquals(fsFromLs.isDirectory(), false);
      Assert.assertEquals(symLinkDir,
          Path.getPathWithoutSchemeAndAuthority(fsFromLs.getPath()));
      Assert.assertEquals(FsPermission.valueOf("-rwxrwxrwx"),
          fsFromLs.getPermission());
      // Change permissions on target
      hdfs.setPermission(targetLinkDir, FsPermission.valueOf("-rw-rw-rw-"));
      listStatus = hdfs.listStatus(new Path("/links"));
      fsFromLs = listStatus[0]; // FileStatus of /links/linkDir
      Assert.assertEquals(fsFromLs.isDirectory(), false);
      Assert.assertEquals(symLinkDir,
          Path.getPathWithoutSchemeAndAuthority(fsFromLs.getPath()));
      Assert.assertEquals(FsPermission.valueOf("-rwxrwxrwx"),
          fsFromLs.getPermission());
      // GetFileStatus test
      // FileStatus of /links/linkDir
      FileStatus fileStatus = hdfs.getFileStatus(symLinkDir);
      Assert.assertEquals(true, fileStatus.isDirectory());
      // resolved to FileStatus of /user/targetRegularDir
      Assert.assertEquals(targetLinkDir,
          Path.getPathWithoutSchemeAndAuthority(fileStatus.getPath()));
      Assert.assertEquals(FsPermission.valueOf("-rw-rw-rw-"),
          fileStatus.getPermission());
    }
  }
{code}
 

 

> ViewFS does not return correct user/group and ACL
> -
>
> Key: HADOOP-17029
> URL: https://issues.apache.org/jira/browse/HADOOP-17029
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Reporter: Abhishek Das
>Assignee: Abhishek Das
>Priority: Major
>
> When doing ls on a mount point parent, the returned user/group ACL is 
> incorrect. It always shows the user and group as the current user, with some 
> arbitrary ACL, which could mislead any application depending on this API.
> cc [~cliang] [~virajith] 






[jira] [Reopened] (HADOOP-17029) ViewFS does not return correct user/group and ACL

2020-06-09 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G reopened HADOOP-17029:
--

> ViewFS does not return correct user/group and ACL
> -
>
> Key: HADOOP-17029
> URL: https://issues.apache.org/jira/browse/HADOOP-17029
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Reporter: Abhishek Das
>Assignee: Abhishek Das
>Priority: Major
>
> When doing ls on a mount point parent, the returned user/group ACL is 
> incorrect. It always shows the user and group as the current user, with some 
> arbitrary ACL, which could mislead any application depending on this API.
> cc [~cliang] [~virajith] 






[jira] [Commented] (HADOOP-17060) listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory

2020-06-09 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129736#comment-17129736
 ] 

Uma Maheswara Rao G commented on HADOOP-17060:
--

Hi [~smajeti],

Copied my PR comment here:

I have verified symlinks behavior in HDFS.

 
{code:java}
  public void testSymlinkOnHDFS() throws Exception {
    // add your HDFS URI here, e.g. hdfs://10.0.1.75:9000
    URI hdfsURI = dfs.getUri();
    FileSystem.enableSymlinks();
    try (FileSystem hdfs = new DistributedFileSystem()) {
      hdfs.initialize(hdfsURI, new HdfsConfiguration());
      final Path targetLinkDir = new Path("/user", "targetRegularDir");
      hdfs.mkdirs(targetLinkDir);
      Path symLinkDir = new Path("/links/linkDir");
      hdfs.createSymlink(targetLinkDir, symLinkDir, true);
      // ListStatus Test
      FileStatus[] listStatus = hdfs.listStatus(new Path("/links"));
      FileStatus fsFromLs = listStatus[0]; // FileStatus of /links/linkDir
      Assert.assertEquals(fsFromLs.isDirectory(), false);
      Assert.assertEquals("/links/linkDir",
          Path.getPathWithoutSchemeAndAuthority(fsFromLs.getPath()).toString());
      // GetFileStatus test
      // FileStatus of /links/linkDir
      FileStatus fileStatus = hdfs.getFileStatus(symLinkDir);
      Assert.assertEquals(true, fileStatus.isDirectory());
      // resolved to FileStatus of /user/targetRegularDir
      Assert.assertEquals("/user/targetRegularDir", Path
          .getPathWithoutSchemeAndAuthority(fileStatus.getPath()).toString());
    }
  }
{code}
 


It turns out that the behaviors of listStatus and getFileStatus are different: 
they return different FileStatus objects. The behavior is the same in ViewFS.

getFileStatus(/test) just runs on the resolved path directly, so it will not be 
represented as a symlink.
listStatus(/) gets /test as a child FileStatus object, but that one is 
represented as a symlink.

Probably we should just clarify the behavior in the user guide and API docs for 
the symlinks case? Otherwise fixing this needs to be done in all the other 
places, and it will be an incompatible change across.
One advantage I see with the existing behavior is that, with listStatus, we can 
know whether a dir is a symlink. If one wants to know the targetFs details, then 
issuing getFileStatus on that path will resolve to the targetFS and get the 
FileStatus at the targetFS.
I will also check with Sanjay on this and update here.

> listStatus and getFileStatus behave inconsistent in the case of ViewFs 
> implementation for isDirectory
> -
>
> Key: HADOOP-17060
> URL: https://issues.apache.org/jira/browse/HADOOP-17060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Srinivasu Majeti
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: viewfs
>
> The listStatus implementation in ViewFs and getFileStatus do not return 
> consistent isDirectory values for an element: listStatus returns isDirectory 
> as false for all softlinks, while getFileStatus returns isDirectory as true.
> {code:java}
> [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop 
> classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus "/"
> FileStatus of viewfs://c3121/testme21may isDirectory:false
> FileStatus of viewfs://c3121/tmp isDirectory:false
> FileStatus of viewfs://c3121/foo isDirectory:false
> FileStatus of viewfs://c3121/tmp21may isDirectory:false
> FileStatus of viewfs://c3121/testme isDirectory:false
> FileStatus of viewfs://c3121/testme2 isDirectory:false <--- returns false
> FileStatus of / isDirectory:true
> [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop 
> classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus /testme2
> FileStatus of viewfs://c3121/testme2/dist-copynativelibs.sh isDirectory:false
> FileStatus of viewfs://c3121/testme2/newfolder isDirectory:true
> FileStatus of /testme2 isDirectory:true <--- returns true
> [hdfs@c3121-node2 ~]$ {code}






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2061: HADOOP-17060. listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory

2020-06-09 Thread GitBox


umamaheswararao commented on a change in pull request #2061:
URL: https://github.com/apache/hadoop/pull/2061#discussion_r437656020



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -1202,19 +1202,18 @@ public FileStatus getFileStatus(Path f) throws IOException {
         INodeLink link = (INodeLink) inode;
         try {
           String linkedPath = link.getTargetFileSystem().getUri().getPath();
-          if("".equals(linkedPath)) {
+          if ("".equals(linkedPath)) {
             linkedPath = "/";
           }
           FileStatus status =
-              ((ChRootedFileSystem)link.getTargetFileSystem())
-              .getMyFs().getFileStatus(new Path(linkedPath));
-          result[i++] = new FileStatus(status.getLen(), false,
-            status.getReplication(), status.getBlockSize(),
-            status.getModificationTime(), status.getAccessTime(),
-            status.getPermission(), status.getOwner(), status.getGroup(),
-            link.getTargetLink(),
-            new Path(inode.fullPath).makeQualified(
-            myUri, null));
+              ((ChRootedFileSystem) link.getTargetFileSystem()).getMyFs()
+                  .getFileStatus(new Path(linkedPath));
+          result[i++] = new FileStatus(status.getLen(), status.isDirectory(),
+              status.getReplication(), status.getBlockSize(),
+              status.getModificationTime(), status.getAccessTime(),
+              status.getPermission(), status.getOwner(), status.getGroup(),
+              link.getTargetLink(),
+              new Path(inode.fullPath).makeQualified(myUri, null));

Review comment:
   @ayushtkn 
   
   I have verified symlinks behavior in HDFS.
   
   ```java
   public void testSymlinkOnHDFS() throws Exception {
     // add your HDFS URI here, e.g. hdfs://10.0.1.75:9000
     URI hdfsURI = dfs.getUri();
     FileSystem.enableSymlinks();
     try (FileSystem hdfs = new DistributedFileSystem()) {
       hdfs.initialize(hdfsURI, new HdfsConfiguration());
       final Path targetLinkDir = new Path("/user", "targetRegularDir");
       hdfs.mkdirs(targetLinkDir);
   
       Path symLinkDir = new Path("/links/linkDir");
       hdfs.createSymlink(targetLinkDir, symLinkDir, true);
   
       // ListStatus Test
       FileStatus[] listStatus = hdfs.listStatus(new Path("/links"));
       FileStatus fsFromLs = listStatus[0]; // FileStatus of /links/linkDir
       Assert.assertEquals(fsFromLs.isDirectory(), false);
       Assert.assertEquals("/links/linkDir",
           Path.getPathWithoutSchemeAndAuthority(fsFromLs.getPath()).toString());
   
       // GetFileStatus test
       // FileStatus of /links/linkDir
       FileStatus fileStatus = hdfs.getFileStatus(symLinkDir);
       Assert.assertEquals(true, fileStatus.isDirectory());
       // resolved to FileStatus of /user/targetRegularDir
       Assert.assertEquals("/user/targetRegularDir", Path
           .getPathWithoutSchemeAndAuthority(fileStatus.getPath()).toString());
     }
   }
   ```
   It turns out that the behaviors of listStatus and getFileStatus are 
different: they return different FileStatus objects. The behavior is the same 
in ViewFS.
   
   getFileStatus(/test) just runs on the resolved path directly, so it will not 
be represented as a symlink.
   listStatus(/) gets /test as a child FileStatus object, but that one is 
represented as a symlink.
   
   Probably we should just clarify the behavior in user guide and API docs 
about the behaviors in symlinks case?  Otherwise fixing this needs to be done 
all other places and it will be incompatible change across. 
   One advantage I see with the existing behavior is that, with listStatus we 
can know dir is symlink. If one wants to know targetFs details, then issue 
GetFileStatus on that path will resolved to targetFS and gets the FileStatus at 
targetFS.
   I will also check with Sanjay about his opinions on this.
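   
   To make the asymmetry concrete, here is a minimal client-side sketch (the viewfs URI and paths are made up for illustration): listStatus() reports the child as a non-directory with a symlink set, while getFileStatus() on the same path resolves the link and reports the target's type.
   
   ```java
   import java.net.URI;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileStatus;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   public class ListVsGetStatus {
     public static void main(String[] args) throws Exception {
       // hypothetical viewfs URI and mount layout
       FileSystem fs = FileSystem.get(URI.create("viewfs://cluster/"),
           new Configuration());
   
       // listStatus: a link child comes back with isDirectory == false
       for (FileStatus st : fs.listStatus(new Path("/links"))) {
         System.out.printf("%s dir=%b symlink=%b%n",
             st.getPath(), st.isDirectory(), st.isSymlink());
       }
   
       // getFileStatus: the path is resolved, so a directory target
       // reports isDirectory == true
       FileStatus resolved = fs.getFileStatus(new Path("/links/linkDir"));
       System.out.println("dir=" + resolved.isDirectory());
     }
   }
   ```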
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #1982: HADOOP-16830. IOStatistics API.

2020-06-09 Thread GitBox


hadoop-yetus commented on pull request #1982:
URL: https://github.com/apache/hadoop/pull/1982#issuecomment-641510854


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  6s |  https://github.com/apache/hadoop/pull/1982 
does not apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1982 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/11/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on pull request #2054: HDFS-15386 ReplicaNotFoundException keeps happening in DN after remov…

2020-06-09 Thread GitBox


sodonnel commented on pull request #2054:
URL: https://github.com/apache/hadoop/pull/2054#issuecomment-641466758


   The PR branch compiles fine locally and the new tests pass OK (all tests in TestFsDataSetImpl pass). We also know the CI run was clean on trunk.
   
   Let me check with some people whether we are good to just push this change in. We don't run CI on all the 3.x branches after backporting from trunk, and the delta between trunk and 3.x here is fairly minor.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2063: HADOOP-17020. RawFileSystem could localize default block size to avoid sync bottleneck in config

2020-06-09 Thread GitBox


hadoop-yetus commented on pull request #2063:
URL: https://github.com/apache/hadoop/pull/2063#issuecomment-640611883


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |  28m 15s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  26m 20s |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 49s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 32s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m 47s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 44s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |  22m 14s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 18s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   2m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  1s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 52s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 161m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2063/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2063 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f5a67e909337 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a8610c15c49 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2063/1/testReport/ |
   | Max. process+thread count | 1631 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2063/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #2063: HADOOP-17020. RawFileSystem could localize default block size to avoid sync bottleneck in config

2020-06-09 Thread GitBox


steveloughran commented on a change in pull request #2063:
URL: https://github.com/apache/hadoop/pull/2063#discussion_r437295343



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
##
@@ -100,6 +101,7 @@ public File pathToFile(Path path) {
   public void initialize(URI uri, Configuration conf) throws IOException {
 super.initialize(uri, conf);
 setConf(conf);
+defaultBlockSize = getDefaultBlockSize(new Path("."));

Review comment:
   must be absolute fs uri
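   
   A minimal sketch of one way to do that, assuming qualifying against this filesystem is what's wanted (illustrative only, not the actual patch):
   
   ```java
   // qualify "/" against this FS so the block-size lookup uses an
   // absolute, scheme-qualified URI rather than the relative "." path
   defaultBlockSize = getDefaultBlockSize(makeQualified(new Path("/")));
   ```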

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
##
@@ -518,7 +520,12 @@ public boolean delete(Path p, boolean recursive) throws 
IOException {
 }
 return new FileStatus[] {
 new DeprecatedRawLocalFileStatus(localf,
-getDefaultBlockSize(f), this) };
+defaultBlockSize, this) };
+  }
+
+  @Override
+  public boolean exists(Path f) throws IOException {
+return pathToFile(f).exists();

Review comment:
   is this a new method, or just an accidental patch quirk





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #1982: HADOOP-16830. IOStatistics API.

2020-06-09 Thread GitBox


steveloughran commented on pull request #1982:
URL: https://github.com/apache/hadoop/pull/1982#issuecomment-641182474


   Checkstyle: I intend to ignore the warnings about the _1, _2 and _3 methods, as they match Scala's; I plan to add tuple/triple classes with those names to hadoop utils soon.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2064: HADOOP-17069 change none default keystore password to nopass.

2020-06-09 Thread GitBox


hadoop-yetus commented on pull request #2064:
URL: https://github.com/apache/hadoop/pull/2064#issuecomment-640797422







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #2043: HADOOP-17050 S3A to support multiple DTs

2020-06-09 Thread GitBox


steveloughran merged pull request #2043:
URL: https://github.com/apache/hadoop/pull/2043


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #1861: HADOOP-13230. Optionally retain directory markers

2020-06-09 Thread GitBox


hadoop-yetus commented on pull request #1861:
URL: https://github.com/apache/hadoop/pull/1861#issuecomment-640569965


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   2m 23s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
13 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 28s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 17s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 29s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   3m 25s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 18s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 50s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 17s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 32s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 32s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 54s |  root: The patch generated 33 new 
+ 64 unchanged - 1 fixed = 97 total (was 65)  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 17s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  the patch passed  |
   | -1 :x: |  findbugs  |   1m 18s |  hadoop-tools/hadoop-aws generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 32s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 22s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 142m 43s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Dead store to leafMarkers in 
org.apache.hadoop.fs.s3a.tools.MarkerTool.scan(Path, boolean, int, boolean, 
StoreContext, OperationCallbacks)  At 
MarkerTool.java:org.apache.hadoop.fs.s3a.tools.MarkerTool.scan(Path, boolean, 
int, boolean, StoreContext, OperationCallbacks)  At MarkerTool.java:[line 187] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1861 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 190cc53f2ddb 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a8610c15c49 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/17/artifact/out/diff-checkstyle-root.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/17/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/17/testReport/ |
   | Max. process+thread count | 3298 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1861/17/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hadoop] hadoop-yetus commented on pull request #2054: HDFS-15386 ReplicaNotFoundException keeps happening in DN after remov…

2020-06-09 Thread GitBox


hadoop-yetus commented on pull request #2054:
URL: https://github.com/apache/hadoop/pull/2054#issuecomment-640963645


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |  11m  7s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-2.10 Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 16s |  root in branch-2.10 failed.  |
   | -1 :x: |  compile  |   0m 12s |  hadoop-hdfs in branch-2.10 failed.  |
   | -0 :warning: |  checkstyle  |   0m 12s |  The patch fails to run 
checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 11s |  hadoop-hdfs in branch-2.10 failed.  |
   | -1 :x: |  javadoc  |   0m 11s |  hadoop-hdfs in branch-2.10 failed.  |
   | +0 :ok: |  spotbugs  |   0m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 12s |  hadoop-hdfs in branch-2.10 failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 11s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  javac  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 11s |  The patch fails to run 
checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  javadoc  |   0m 13s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 16s |  ASF License check generated no 
output?  |
   |  |   |  16m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2054 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 406910e1b801 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | branch-2.10 / 14ff617 |
   | Default Java | Oracle Corporation-1.7.0_95-b00 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/buildtool-patch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2054/5/testReport/ |
 

[GitHub] [hadoop] aajisaka commented on a change in pull request #2026: HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes

2020-06-09 Thread GitBox


aajisaka commented on a change in pull request #2026:
URL: https://github.com/apache/hadoop/pull/2026#discussion_r437242388



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
##
@@ -74,6 +75,11 @@
 
   private static String taskName = "attempt_001_02_r03_04_05";
 
+  @After
+  public void after() throws Exception {
+cleanTokenPasswordFile();
+  }
+

Review comment:
   Big +1 for this change, but I think this issue can be fixed in a 
separate jira.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2043: HADOOP-17050 S3A to support multiple DTs

2020-06-09 Thread GitBox


steveloughran commented on pull request #2043:
URL: https://github.com/apache/hadoop/pull/2043#issuecomment-641179089


   thx. will fix the javadocs then commit
   
   ```
   [WARNING] 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2043/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3ADelegationTokens.java:459:
 warning - Tag @link: missing '#': 
"DelegationTokenIssuer.collectDelegationTokens()"
   [WARNING] 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2043/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3ADelegationTokens.java:459:
 warning - Tag @link: can't find 
DelegationTokenIssuer.collectDelegationTokens() in 
org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens
   ```
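   
   The fix should just be the missing '#' in the tag, i.e.:
   
   ```java
   * {@link DelegationTokenIssuer#collectDelegationTokens()}
   ```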



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2055: HDFS-15393: Review of PendingReconstructionBlocks

2020-06-09 Thread GitBox


hadoop-yetus commented on pull request #2055:
URL: https://github.com/apache/hadoop/pull/2055#issuecomment-640930858


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 31s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 37s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  0s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  3s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 43s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 23 new + 126 unchanged - 3 fixed = 149 total (was 129)  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 26s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 115m 34s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 187m 45s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2055/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2055 |
   | JIRA Issue | HDFS-15393 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a54ce8b9c072 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0c25131ca43 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2055/2/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2055/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2055/2/testReport/ |
   | Max. process+thread count | 2970 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2055/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, 

[GitHub] [hadoop] ayushtkn commented on a change in pull request #2061: HADOOP-17060. listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory

2020-06-09 Thread GitBox


ayushtkn commented on a change in pull request #2061:
URL: https://github.com/apache/hadoop/pull/2061#discussion_r436837228



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -1202,19 +1202,18 @@ public FileStatus getFileStatus(Path f) throws 
IOException {
   INodeLink link = (INodeLink) inode;
   try {
 String linkedPath = link.getTargetFileSystem().getUri().getPath();
-if("".equals(linkedPath)) {
+if ("".equals(linkedPath)) {
   linkedPath = "/";
 }
 FileStatus status =
-((ChRootedFileSystem)link.getTargetFileSystem())
-.getMyFs().getFileStatus(new Path(linkedPath));
-result[i++] = new FileStatus(status.getLen(), false,
-  status.getReplication(), status.getBlockSize(),
-  status.getModificationTime(), status.getAccessTime(),
-  status.getPermission(), status.getOwner(), status.getGroup(),
-  link.getTargetLink(),
-  new Path(inode.fullPath).makeQualified(
-  myUri, null));
+((ChRootedFileSystem) link.getTargetFileSystem()).getMyFs()
+.getFileStatus(new Path(linkedPath));
+result[i++] = new FileStatus(status.getLen(), status.isDirectory(),
+status.getReplication(), status.getBlockSize(),
+status.getModificationTime(), status.getAccessTime(),
+status.getPermission(), status.getOwner(), status.getGroup(),
+link.getTargetLink(),
+new Path(inode.fullPath).makeQualified(myUri, null));

Review comment:
   Well, things are different in different places, and TBH I don't have a strong opinion on which is the best way to do it.
   On a personal note, changing `getFileStatus()` seems a little safer to me, as the assertions and related checks stay as they are and the changes stay restricted to `viewFS` only, with no changes to the link interpretation. (My assumption; it should be safe, but I haven't dug in much.)
   
   ```
     return new FileStatus(0, true, 0, 0, creationTime, creationTime,
         PERMISSION_555, ugi.getShortUserName(), ugi.getPrimaryGroupName(),
         new Path(theInternalDir.fullPath).makeQualified(
             myUri, ROOT_PATH));
   ```
   `getFileStatus()` treats it as a link only (but with isDir true); it doesn't show the target filesystem's permissions and times. That also needs to be changed, similarly to HADOOP-17029, to resolve permissions and the rest from the target filesystem. The FileStatus should be the same through both APIs? By doing that we can bring things into sync and get rid of the inconsistencies between these two APIs for now.
   
   Changes in `getListing()`, apart from bringing the APIs in sync (we would need to change `getFileStatus()` there as well, since 'true' is hardcoded), also seem to change the symlink interpretation logic to align with other systems, and I think that might break things for people relying on checks like `if (isDir==false and link!=null)`. Maybe we can have a follow-up JIRA to change the link interpretation with a bigger audience.
   
   But in any case, I have no objection to either approach; it's all up to you whichever way you want to go ahead. :-)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2026: HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes

2020-06-09 Thread GitBox


aajisaka commented on pull request #2026:
URL: https://github.com/apache/hadoop/pull/2026#issuecomment-641193896


   Now I'm +1. Hi @jojochuang, could you review this?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #1982: HADOOP-16830. IOStatistics API.

2020-06-09 Thread GitBox


hadoop-yetus commented on pull request #1982:
URL: https://github.com/apache/hadoop/pull/1982#issuecomment-640902616


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------:|
   | +0 :ok: |  reexec  |   1m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
23 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 37s |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 18s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 58s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  9s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 23s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 18s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 26s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  1s |  the patch passed  |
   | -1 :x: |  javac  |  18m  1s |  root generated 1 new + 1862 unchanged - 1 
fixed = 1863 total (was 1863)  |
   | -0 :warning: |  checkstyle  |   2m 57s |  root: The patch generated 20 new 
+ 160 unchanged - 22 fixed = 180 total (was 182)  |
   | +1 :green_heart: |  mvnsite  |   2m  7s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 8 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 50s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 26s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 129m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1982 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint xml |
   | uname | Linux 039c4101d198 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0c25131ca43 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/10/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/10/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/10/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/10/testReport/ |
   | Max. process+thread count | 3242 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1982/10/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hadoop] ScrapCodes closed pull request #2064: HADOOP-17069 change none default keystore password to nopass.

2020-06-09 Thread GitBox


ScrapCodes closed pull request #2064:
URL: https://github.com/apache/hadoop/pull/2064


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2063: HADOOP-17020. RawFileSystem could localize default block size to avoid sync bottleneck in config

2020-06-09 Thread GitBox


steveloughran commented on pull request #2063:
URL: https://github.com/apache/hadoop/pull/2063#issuecomment-641184637


   relates to #2002



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ScrapCodes opened a new pull request #2064: HADOOP-17069 change none default keystore password to nopass.

2020-06-09 Thread GitBox


ScrapCodes opened a new pull request #2064:
URL: https://github.com/apache/hadoop/pull/2064


   Since the java keytool does not allow us to create a keystore with a password shorter than 6 characters (i.e. "none"), we should consider updating the password to one that is 6 characters long (e.g. "nopass").
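   
   For example (flags mirror the keytool invocation quoted in the JIRA), a 6-character password is accepted where "none" is rejected:
   
   ```
   $ keytool -genkeypair -storetype jceks -keyalg RSA -alias kms \
       -keystore `pwd`/keystore -storepass nopass
   ```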



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2061: HADOOP-17060. listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory

2020-06-09 Thread GitBox


umamaheswararao commented on a change in pull request #2061:
URL: https://github.com/apache/hadoop/pull/2061#discussion_r436770666



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -1202,19 +1202,18 @@ public FileStatus getFileStatus(Path f) throws 
IOException {
   INodeLink link = (INodeLink) inode;
   try {
 String linkedPath = link.getTargetFileSystem().getUri().getPath();
-if("".equals(linkedPath)) {
+if ("".equals(linkedPath)) {
   linkedPath = "/";
 }
 FileStatus status =
-((ChRootedFileSystem)link.getTargetFileSystem())
-.getMyFs().getFileStatus(new Path(linkedPath));
-result[i++] = new FileStatus(status.getLen(), false,
-  status.getReplication(), status.getBlockSize(),
-  status.getModificationTime(), status.getAccessTime(),
-  status.getPermission(), status.getOwner(), status.getGroup(),
-  link.getTargetLink(),
-  new Path(inode.fullPath).makeQualified(
-  myUri, null));
+((ChRootedFileSystem) link.getTargetFileSystem()).getMyFs()
+.getFileStatus(new Path(linkedPath));
+result[i++] = new FileStatus(status.getLen(), status.isDirectory(),
+status.getReplication(), status.getBlockSize(),
+status.getModificationTime(), status.getAccessTime(),
+status.getPermission(), status.getOwner(), status.getGroup(),
+link.getTargetLink(),
+new Path(inode.fullPath).makeQualified(myUri, null));

Review comment:
   
   ```
   isDir==true --> It is a directory
   isDir==false --> Can be a file or a symlink; to decide which:
   isDir==false and link==null --> it is a file
   isDir==false and link!=null --> it is a symlink
   ```
   Here the nio Files API returns true from isDirectory. But here we cannot make that judgement with this information. I did see the 'l' flag shown along with the directory bits. However, native filesystems seem to capture the info about the target filesystem and return isDir true based on that; the symlink is denoted along with the permission bits.
   My original thought was to change getFileStatus, see this [comment](https://issues.apache.org/jira/browse/HADOOP-17060?focusedCommentId=17113760=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17113760), as you suggest here.
   But after verifying the tests on my local Mac, I realized isDirectory returns true in those cases, whereas here we cannot make that decision. On the Mac it shows a folder icon if the target is a directory, and isDirectory is true.
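   
   For reference, a small sketch of the convention in the table above, as a client would apply it (illustrative only):
   
   ```java
   // Classify a FileStatus per the convention above: directory wins;
   // otherwise a non-null symlink means "symlink", else a plain file.
   static String kind(org.apache.hadoop.fs.FileStatus st) {
     if (st.isDirectory()) {
       return "directory";
     }
     return st.isSymlink() ? "symlink" : "file";
   }
   ```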
   
   
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #2043: HADOOP-17050 S3A to support multiple DTs

2020-06-09 Thread GitBox


hadoop-yetus commented on pull request #2043:
URL: https://github.com/apache/hadoop/pull/2043#issuecomment-641226351







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17050) S3A to support additional token issuers

2020-06-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129302#comment-17129302
 ] 

Hudson commented on HADOOP-17050:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18341 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18341/])
HADOOP-17050 S3A to support additional token issuers (github: rev 
ac5d899d40d7b50ba73c400a708f59fb128e6e30)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/S3ADelegationTokens.java


> S3A to support additional token issuers
> ---
>
> Key: HADOOP-17050
> URL: https://issues.apache.org/jira/browse/HADOOP-17050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.1
>
>
> In 
> {{org.apache.hadoop.fs.s3a.auth.delegation.AbstractDelegationTokenBinding}} 
> the {{createDelegationToken}} should return a list of tokens.
> With this functionality, the {{AbstractDelegationTokenBinding}} can get two 
> different tokens at the same time.
> {{AbstractDelegationTokenBinding.TokenSecretManager}} should be extended to 
> retrieve secrets and lookup delegation tokens (use the public API for 
> secretmanager in hadoop)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17050) S3A to support additional token issuers

2020-06-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17050:

Priority: Minor  (was: Major)

> S3A to support additional token issuers
> ---
>
> Key: HADOOP-17050
> URL: https://issues.apache.org/jira/browse/HADOOP-17050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.1
>
>
> In 
> {{org.apache.hadoop.fs.s3a.auth.delegation.AbstractDelegationTokenBinding}} 
> the {{createDelegationToken}} should return a list of tokens.
> With this functionality, the {{AbstractDelegationTokenBinding}} can get two 
> different tokens at the same time.
> {{AbstractDelegationTokenBinding.TokenSecretManager}} should be extended to 
> retrieve secrets and lookup delegation tokens (use the public API for 
> secretmanager in hadoop)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17050) S3A to support additional token issuers

2020-06-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17050:

Fix Version/s: 3.3.1

> S3A to support additional token issuers
> ---
>
> Key: HADOOP-17050
> URL: https://issues.apache.org/jira/browse/HADOOP-17050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.1
>
>
> In 
> {{org.apache.hadoop.fs.s3a.auth.delegation.AbstractDelegationTokenBinding}} 
> the {{createDelegationToken}} should return a list of tokens.
> With this functionality, the {{AbstractDelegationTokenBinding}} can get two 
> different tokens at the same time.
> {{AbstractDelegationTokenBinding.TokenSecretManager}} should be extended to 
> retrieve secrets and lookup delegation tokens (use the public API for 
> secretmanager in hadoop)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17050) S3A to support additional token issuers

2020-06-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17050:

Summary: S3A to support additional token issuers  (was: Add support for 
multiple delegation tokens in S3AFilesystem)

> S3A to support additional token issuers
> ---
>
> Key: HADOOP-17050
> URL: https://issues.apache.org/jira/browse/HADOOP-17050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Steve Loughran
>Priority: Major
>
> In 
> {{org.apache.hadoop.fs.s3a.auth.delegation.AbstractDelegationTokenBinding}} 
> the {{createDelegationToken}} should return a list of tokens.
> With this functionality, the {{AbstractDelegationTokenBinding}} can get two 
> different tokens at the same time.
> {{AbstractDelegationTokenBinding.TokenSecretManager}} should be extended to 
> retrieve secrets and lookup delegation tokens (use the public API for 
> secretmanager in hadoop)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17069) The default password(none) in JavaKeyStoreProvider, is no-longer useful.

2020-06-09 Thread Prashant Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prashant Sharma resolved HADOOP-17069.
--
Resolution: Invalid

Resolving as invalid, as I realised that the keystore has to be created by Hadoop's own tools, not with keytool. The documentation does not say anything about this; maybe a note can be added.
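
For example, creating the keystore via the Hadoop key shell instead of keytool (a sketch; the provider path is illustrative):
{code}
$ hadoop key create mykey -provider jceks://file/home/user/kms.keystore
{code}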

> The default password(none) in JavaKeyStoreProvider, is no-longer useful. 
> -
>
> Key: HADOOP-17069
> URL: https://issues.apache.org/jira/browse/HADOOP-17069
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Prashant Sharma
>Priority: Major
>
> Since, the java keytool does not allow us to create a keystore with password 
> length less than 6 characters(i.e. none), we should consider updating the 
> password to a 6 char length (e.g. nopass).
> {code}
> $ keytool -genkeypair -storetype jceks -keyalg RSA -alias kms -keystore 
> `pwd`/keystore4 -storepass none
> keytool error: java.lang.Exception: Keystore password must be at least 6 
> characters
> $ java -version
> openjdk version "1.8.0_252"
> OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1ubuntu1-b09)
> OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16202) S3A openFile() operation to support explicit length parameter

2020-06-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129068#comment-17129068
 ] 

Steve Loughran commented on HADOOP-16202:
-

I want to make the length option an fs.opt one rather than just fs.s3a, so that apps which set it don't need to know which filesystems support the feature.
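
A sketch of what that could look like from the application side (the option key name here is hypothetical, pending the final choice):
{code}
FSDataInputStream in = fs.openFile(path)
    .opt("fs.opt.openfile.length", Long.toString(knownLength)) // hypothetical key
    .withFileStatus(status)
    .build()
    .get();
{code}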

> S3A openFile() operation to support explicit length parameter
> -
>
> Key: HADOOP-16202
> URL: https://issues.apache.org/jira/browse/HADOOP-16202
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The {{openFile()}} builder API lets us add new options when reading a file
> Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows 
> the length of the file to be declared. If set, *no check for the existence of 
> the file is issued when opening the file*
> Also: withFileStatus() to take any FileStatus implementation, rather than 
> only S3AFileStatus -and not check that the path matches the path being 
> opened. Needed to support viewFS-style wrapping and mounting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16492) Support HuaweiCloud Object Storage - as a file system in Hadoop

2020-06-09 Thread zhongjun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhongjun updated HADOOP-16492:
--
Attachment: OBSA HuaweiCloud OBS Adapter for Hadoop Support.pdf

> Support HuaweiCloud Object Storage - as a file system in Hadoop
> ---
>
> Key: HADOOP-16492
> URL: https://issues.apache.org/jira/browse/HADOOP-16492
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: zhongjun
>Priority: Major
> Attachments: HADOOP-16492.001.patch, HADOOP-16492.002.patch, 
> HADOOP-16492.003.patch, OBSA HuaweiCloud OBS Adapter for Hadoop Support.pdf, 
> huaweicloud-obs-integrate.pdf
>
>
> Added support for HuaweiCloud 
> OBS([https://www.huaweicloud.com/en-us/product/obs.html]) to Hadoop, just 
> like what we did before for S3, ADL, OSS, etc. With simple configuration, 
> Hadoop applications can read/write data from OBS without any code change.
>  obs sdk link: https://github.com/huaweicloud/huaweicloud-sdk-java-obs
> obs API link: 
> https://support-intl.huaweicloud.com/en-us/api-obs/en-us_topic_0100846735.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org