[jira] [Commented] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-07-01 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17373270#comment-17373270
 ] 

Ayush Saxena commented on HADOOP-17787:
---

[~gautham]/[~elgoiri]

I see there are a bunch of changes around the CI files by you folks. Any 
pointers: is there anything that could break the building of patches?

The precommit builds don't seem to work:

https://ci-hadoop.apache.org/view/Hadoop/job/PreCommit-YARN-Build/1085/

[https://ci-hadoop.apache.org/view/Hadoop/job/PreCommit-HDFS-Build/652/]

 

I have started trying to fix the HDFS one, with no luck so far. Do let me know 
if you folks are aware of anything of that sort.

 

 

> Refactor fetching of credentials in Jenkins
> ---
>
> Key: HADOOP-17787
> URL: https://issues.apache.org/jira/browse/HADOOP-17787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Need to refactor fetching of credentials in Jenkinsfile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17784) hadoop-aws landsat-pds test bucket will be deleted after Jul 1, 2021

2021-07-01 Thread Leona Yoda (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17373245#comment-17373245
 ] 

Leona Yoda edited comment on HADOOP-17784 at 7/2/21, 6:00 AM:
--

I checked the Registry of Open Data on AWS ([https://registry.opendata.aws/]); 
there are several datasets whose format is csv.gz.

 
 * NOAA Global Historical Climatology Network Daily
[https://registry.opendata.aws/noaa-ghcn/]

{code:java}
// code placeholder
$ aws s3 ls noaa-ghcn-pds/csv.gz/ --no-sign-request --human-readable
2021-07-02 04:08:17 3.3 KiB 1763.csv.gz 
2021-07-02 04:08:27 3.2 KiB 1764.csv.gz 
... 
2021-07-02 04:09:04 143.1 MiB 2019.csv.gz 
2021-07-02 04:09:04 138.8 MiB 2020.csv.gz 
2021-07-02 04:09:04 66.6 MiB 2021.csv.gz

$ filename="2020.csv.gz"
$ aws s3 cp s3://noaa-ghcn-pds/csv.gz/$filename /tmp --no-sign-request && cat \
/tmp/$filename | gzip -d | head
AE41196,20200101,TMIN,168,,,S,
AE41196,20200101,PRCP,0,D,,S,
AE41196,20200101,TAVG,211,H,,S,
...

$ wc -l /tmp/$filename
698966 /tmp/2020.csv.gz{code}
The datasets for these years seem to be of sufficient size.

 * NOAA Integrated Surface Database
 [https://registry.opendata.aws/noaa-isd/]

{code:java}
// code placeholder
$ aws s3 ls s3://noaa-isd-pds/ --no-sign-request --human-readable
...
2021-07-02 09:57:30   12.1 MiB isd-inventory.csv.z
2020-07-04 09:24:18  428 Bytes isd-inventory.txt
2021-07-02 09:57:14   13.1 MiB isd-inventory.txt.z
...

$ filename="isd-inventory.csv.z"
$ aws s3 cp s3://noaa-isd-pds/$filename /tmp --no-sign-request && cat \
/tmp/$filename | gzip -d | head

"USAF","WBAN","YEAR","JAN","FEB","MAR","APR","MAY","JUN","JUL","AUG","SEP","OCT","NOV","DEC"
"007018","9","2011","0","0","2104","2797","2543","2614","382","0","0","0","0","0"
"007018","9","2013","0","0","0","0","0","0","710","0","0","0","0","0"
...

$ wc -l /tmp/$filename
44296 /tmp/isd-inventory.csv.z{code}
Under the subpath s3://noaa-isd-pds/data/, there are a lot of gzipped files, 
but they are space-separated.
 * iNaturalist Licensed Observation Images
 [https://registry.opendata.aws/inaturalist-open-data/]

{code:java}
// code placeholder
aws s3 ls s3://inaturalist-open-data/ --no-sign-request --human-readable
   PRE metadata/
   PRE photos/
2021-05-20 15:59:08    1.8 GiB observations.csv.gz
2021-05-20 15:54:47    3.8 MiB observers.csv.gz
2021-05-20 16:02:14    3.1 GiB photos.csv.gz
2021-05-20 15:54:52   25.9 MiB taxa.csv.gz

$ filename="taxa.csv.gz"
$ aws s3 cp s3://inaturalist-open-data/$filename /tmp --no-sign-request && cat \
/tmp/$filename | gzip -d | head
taxon_id ancestry rank_level rank name active
3736 48460/1/2/355675/3/67566/3727/3735 10 species Phimosus infuscatus true
8742 48460/1/2/355675/3/7251/8659/8741 10 species Snowornis cryptolophus true
...
$ wc -l /tmp/$filename
108058 /tmp/taxa.csv.gz

$ filename="observations.csv.gz"
$ aws s3 cp s3://inaturalist-open-data/$filename /tmp --no-sign-request && cat \
/tmp/$filename | gzip -d | head
observation_uuid observer_id latitude longitude positional_accuracy taxon_id quality_grade observed_on
7d59cfce-7602-4877-a027-80008481466f 354 38.0127535059 -122.5013941526 76553 research 2011-09-03
b5d3c525-2bff-4ab4-ac4d-21c655d0a4d2 505 38.6113711142 -122.7838897705 52854 research 2011-09-04
...
$ wc -l /tmp/$filename
8692639 /tmp/observations.csv.gz{code}
The files at the top level seem good, but they are tab-separated.

 

cf. LandSat-8
{code:java}
// code placeholder
aws s3 ls s3://landsat-pds/ --no-sign-request --human-readable
   PRE 4ac2fe6f-99c0-4940-81ea-2accba9370b9/
   PRE L8/
   PRE a96cb36b-1e0d-4245-854f-399ad968d6d3/
   PRE c1/
   PRE e6acf117-1cbf-4e88-af62-2098f464effe/
   PRE runs/
   PRE tarq/
   PRE tarq_corrupt/
   PRE test/
2017-05-17 22:42:27   23.2 KiB index.html
2016-08-20 02:12:04  105 Bytes robots.txt
2021-07-02 14:52:06   39 Bytes run_info.json
2021-07-02 14:02:06    3.2 KiB run_list.txt
2018-08-29 09:45:15   43.5 MiB scene_list.gz
$ filename="scene_list.gz"
$ aws s3 cp s3://landsat-pds/$filename /tmp --no-sign-request && cat \
/tmp/$filename | gzip -d | head
entityId,acquisitionDate,cloudCover,processingLevel,path,row,min_lat,min_lon,max_lat,max_lon,download_url
LC80101172015002LGN00,2015-01-02 15:49:05.571384,80.81,L1GT,10,117,-79.09923,-139.66082,-77.7544,-125.09297,https://s3-us-west-2.amazonaws.com/landsat-pds/L8/010/117/LC80101172015002LGN00/index.html
LC80260392015002LGN00,2015-01-02 16:56:51.399666,90.84,L1GT,26,39,29.23106,-97.48576,31.36421,-95.16029,https://s3-us-west-2.amazonaws.com/landsat-pds/L8/026/039/LC80260392015002LGN00/index.html
...


$ wc -l /tmp/$filename
183059 /tmp/scene_list.gz{code}
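
As a sanity check for how a replacement might be wired up, here is a minimal 
sketch (my own, not from any patch) of reading one of the candidate files 
anonymously through s3a, the way the landsat tests read scene_list.gz today; 
the per-bucket credentials-provider key and AnonymousAWSCredentialsProvider 
are the standard hadoop-aws ones:
{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.util.zip.GZIPInputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AnonymousCsvGzProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Read the public bucket without any credentials.
    conf.set("fs.s3a.bucket.noaa-ghcn-pds.aws.credentials.provider",
        "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider");
    FileSystem fs = FileSystem.get(URI.create("s3a://noaa-ghcn-pds/"), conf);
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(
        new GZIPInputStream(fs.open(new Path("/csv.gz/2020.csv.gz")))))) {
      System.out.println(reader.readLine()); // first record, comma-separated
    }
  }
}
{code}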
 

 


was (Author: yoda):

[jira] [Commented] (HADOOP-17784) hadoop-aws landsat-pds test bucket will be deleted after Jul 1, 2021

2021-07-01 Thread Leona Yoda (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17373245#comment-17373245
 ] 

Leona Yoda commented on HADOOP-17784:
-

I checked the Registry of Open Data on AWS (https://registry.opendata.aws/); 
there are several datasets whose format is csv.gz.

 
 * NOAA Global Historical Climatology Network Daily
[https://registry.opendata.aws/noaa-ghcn/]

{code:java}
// code placeholder
$ aws s3 ls noaa-ghcn-pds/csv.gz/ --no-sign-request --human-readable
2021-07-02 04:08:17 3.3 KiB 1763.csv.gz 
2021-07-02 04:08:27 3.2 KiB 1764.csv.gz 
... 
2021-07-02 04:09:04 143.1 MiB 2019.csv.gz 
2021-07-02 04:09:04 138.8 MiB 2020.csv.gz 
2021-07-02 04:09:04 66.6 MiB 2021.csv.gz

$ filename="2020.csv.gz"
$ aws s3 cp s3://noaa-ghcn-pds/csv.gz/$filename /tmp --no-sign-request && cat \
/tmp/$filename | gzip -d | head
AE41196,20200101,TMIN,168,,,S,
AE41196,20200101,PRCP,0,D,,S,
AE41196,20200101,TAVG,211,H,,S,
...

$ wc -l /tmp/$filename
698966 /tmp/2020.csv.gz{code}
The datasets for these years seem to be of sufficient size.
 * NOAA Integrated Surface Database
[https://registry.opendata.aws/noaa-isd/]

{code:java}
// code placeholder
$ aws s3 ls s3://noaa-isd-pds/ --no-sign-request --human-readable
...
2021-07-02 09:57:30   12.1 MiB isd-inventory.csv.z
2020-07-04 09:24:18  428 Bytes isd-inventory.txt
2021-07-02 09:57:14   13.1 MiB isd-inventory.txt.z
...

$ filename="isd-inventory.csv.z"
$ aws s3 cp s3://noaa-isd-pds/$filename /tmp --no-sign-request && cat \
/tmp/$filename | gzip -d | head

"USAF","WBAN","YEAR","JAN","FEB","MAR","APR","MAY","JUN","JUL","AUG","SEP","OCT","NOV","DEC"
"007018","9","2011","0","0","2104","2797","2543","2614","382","0","0","0","0","0"
"007018","9","2013","0","0","0","0","0","0","710","0","0","0","0","0"
...

$ wc -l /tmp/$filename
44296 /tmp/isd-inventory.csv.z{code}
Under the subpath s3://noaa-isd-pds/data/, there are a lot of gzipped files, 
but they are space-separated.


 * iNaturalist Licensed Observation Images
[https://registry.opendata.aws/inaturalist-open-data/]

{code:java}
// code placeholder
aws s3 ls s3://inaturalist-open-data/ --no-sign-request --human-readable
   PRE metadata/
   PRE photos/
2021-05-20 15:59:08    1.8 GiB observations.csv.gz
2021-05-20 15:54:47    3.8 MiB observers.csv.gz
2021-05-20 16:02:14    3.1 GiB photos.csv.gz
2021-05-20 15:54:52   25.9 MiB taxa.csv.gz

$ filename="taxa.csv.gz"
$ aws s3 cp s3://inaturalist-open-data/$filename /tmp --no-sign-request && cat \
/tmp/$filename | gzip -d | head
taxon_id ancestry rank_level rank name active
3736 48460/1/2/355675/3/67566/3727/3735 10 species Phimosus infuscatus true
8742 48460/1/2/355675/3/7251/8659/8741 10 species Snowornis cryptolophus true
...
$ wc -l /tmp/$filename
108058 /tmp/taxa.csv.gz

$ filename="observations.csv.gz"
$ aws s3 cp s3://inaturalist-open-data/$filename /tmp --no-sign-request && cat \
/tmp/$filename | gzip -d | head
observation_uuid observer_id latitude longitude positional_accuracy taxon_id quality_grade observed_on
7d59cfce-7602-4877-a027-80008481466f 354 38.0127535059 -122.5013941526 76553 research 2011-09-03
b5d3c525-2bff-4ab4-ac4d-21c655d0a4d2 505 38.6113711142 -122.7838897705 52854 research 2011-09-04
...
$ wc -l /tmp/$filename
8692639 /tmp/observations.csv.gz{code}
The files at the top level seem good, but they are tab-separated.

 

cf. LandSat-8
{code:java}
// code placeholder
aws s3 ls s3://landsat-pds/ --no-sign-request --human-readable
   PRE 4ac2fe6f-99c0-4940-81ea-2accba9370b9/
   PRE L8/
   PRE a96cb36b-1e0d-4245-854f-399ad968d6d3/
   PRE c1/
   PRE e6acf117-1cbf-4e88-af62-2098f464effe/
   PRE runs/
   PRE tarq/
   PRE tarq_corrupt/
   PRE test/
2017-05-17 22:42:27   23.2 KiB index.html
2016-08-20 02:12:04  105 Bytes robots.txt
2021-07-02 14:52:06   39 Bytes run_info.json
2021-07-02 14:02:06    3.2 KiB run_list.txt
2018-08-29 09:45:15   43.5 MiB scene_list.gz
$ filename="scene_list.gz"
$ aws s3 cp s3://landsat-pds/$filename /tmp --no-sign-request && cat \
/tmp/$filename | gzip -d | head
entityId,acquisitionDate,cloudCover,processingLevel,path,row,min_lat,min_lon,max_lat,max_lon,download_url
LC80101172015002LGN00,2015-01-02 15:49:05.571384,80.81,L1GT,10,117,-79.09923,-139.66082,-77.7544,-125.09297,https://s3-us-west-2.amazonaws.com/landsat-pds/L8/010/117/LC80101172015002LGN00/index.html
LC80260392015002LGN00,2015-01-02 16:56:51.399666,90.84,L1GT,26,39,29.23106,-97.48576,31.36421,-95.16029,https://s3-us-west-2.amazonaws.com/landsat-pds/L8/026/039/LC80260392015002LGN00/index.html
...


$ wc -l /tmp/$filename
183059 /tmp/scene_list.gz{code}




 

 

> hadoop-aws landsat-pds test bucket will be deleted after Jul 1, 2021

[jira] [Comment Edited] (HADOOP-17755) EOF reached error reading ORC file on S3A

2021-07-01 Thread Dongjoon Hyun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17373239#comment-17373239
 ] 

Dongjoon Hyun edited comment on HADOOP-17755 at 7/2/21, 5:39 AM:
-

Could you share the other Hadoop-related configuration you used, [~arghya18]? 
I'm using vanilla Apache Hadoop 3.3.1 with the following Hadoop-related 
configuration in an EKS environment. For everything else, the defaults are 
used. For Spark, it's Spark 3.1.2.
{code}
-c spark.hadoop.fs.s3a.experimental.input.fadvise=random \
-c spark.hadoop.fs.s3a.downgrade.syncable.exceptions=true \
-c spark.kubernetes.driverEnv.AWS_REGION=us-west-2 \
-c spark.executorEnv.AWS_REGION=us-west-2 \
{code}


was (Author: dongjoon):
Could you share the other Hadoop-related configuration you used, [~arghya18]? 
I'm using vanilla Apache Hadoop 3.3.1 with the following Hadoop-related 
configuration in an EKS environment. For everything else, the defaults are used.
{code}
-c spark.hadoop.fs.s3a.experimental.input.fadvise=random \
-c spark.hadoop.fs.s3a.downgrade.syncable.exceptions=true \
-c spark.kubernetes.driverEnv.AWS_REGION=us-west-2 \
-c spark.executorEnv.AWS_REGION=us-west-2 \
{code}

> EOF reached error reading ORC file on S3A
> -
>
> Key: HADOOP-17755
> URL: https://issues.apache.org/jira/browse/HADOOP-17755
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: Hadoop 3.2.0
>Reporter: Arghya Saha
>Priority: Major
>
> Hi, I am trying to do some transformations using Spark 3.1.1 with Hadoop 3.2 
> on K8s, using s3a.
> I have around 700 GB of data to read and around 200 executors (5 vCores and 
> 30G each).
> It's able to read most of the files in the problematic stage (Scan orc => 
> Filter => Project) but fails on a few files at the end with the error below. 
> The size of the file mentioned in the error is around 140 MB, and all other 
> files are of similar size.
> I am able to read and rewrite the specific file mentioned, which suggests the 
> file is not corrupted.
> Let me know if further information is required.
>  
> {code:java}
> java.io.IOException: Error reading file: 
> s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orcjava.io.IOException:
>  Error reading file: 
> s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orc
>  at 
> org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1331) at 
> org.apache.orc.mapreduce.OrcMapreduceRecordReader.ensureBatch(OrcMapreduceRecordReader.java:78)
>  at 
> org.apache.orc.mapreduce.OrcMapreduceRecordReader.nextKeyValue(OrcMapreduceRecordReader.java:96)
>  at 
> org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:37)
>  at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>  at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:511) at 
> scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:177)
>  at 
> org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
>  at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99) at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52) at 
> org.apache.spark.scheduler.Task.run(Task.scala:131) at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
>  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439) at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500) at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
> Source) at java.base/java.lang.Thread.run(Unknown Source)Caused by: 
> java.io.EOFException: End of file reached before reading fully. at 
> org.apache.hadoop.fs.s3a.S3AInputStream.readFully(S3AInputStream.java:702) at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111) 
> at 
> org.apache.orc.impl.RecordReaderUtils.readDiskRanges(RecordReaderUtils.java:566)
>  at 
> org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readFileData(RecordReaderUtils.java:285)
>  at 
> org.apache.orc.impl.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:1237)
>  at 
> org.apache.orc.impl.RecordReaderImpl.readStripe(RecordReaderImpl.java:1105) 
> at 
> org.apache.orc.impl.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1256)
>  at 
> org.apache.or

[jira] [Commented] (HADOOP-17755) EOF reached error reading ORC file on S3A

2021-07-01 Thread Dongjoon Hyun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17373239#comment-17373239
 ] 

Dongjoon Hyun commented on HADOOP-17755:


Could you share the other Hadoop-related configuration you used, [~arghya18]? 
I'm using vanilla Apache Hadoop 3.3.1 with the following Hadoop-related 
configuration in an EKS environment. For everything else, the defaults are used.
{code}
-c spark.hadoop.fs.s3a.experimental.input.fadvise=random \
-c spark.hadoop.fs.s3a.downgrade.syncable.exceptions=true \
-c spark.kubernetes.driverEnv.AWS_REGION=us-west-2 \
-c spark.executorEnv.AWS_REGION=us-west-2 \
{code}

> EOF reached error reading ORC file on S3A
> -
>
> Key: HADOOP-17755
> URL: https://issues.apache.org/jira/browse/HADOOP-17755
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: Hadoop 3.2.0
>Reporter: Arghya Saha
>Priority: Major
>
> Hi, I am trying to do some transformations using Spark 3.1.1 with Hadoop 3.2 
> on K8s, using s3a.
> I have around 700 GB of data to read and around 200 executors (5 vCores and 
> 30G each).
> It's able to read most of the files in the problematic stage (Scan orc => 
> Filter => Project) but fails on a few files at the end with the error below. 
> The size of the file mentioned in the error is around 140 MB, and all other 
> files are of similar size.
> I am able to read and rewrite the specific file mentioned, which suggests the 
> file is not corrupted.
> Let me know if further information is required.
>  
> {code:java}
> java.io.IOException: Error reading file: 
> s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orcjava.io.IOException:
>  Error reading file: 
> s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orc
>  at 
> org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1331) at 
> org.apache.orc.mapreduce.OrcMapreduceRecordReader.ensureBatch(OrcMapreduceRecordReader.java:78)
>  at 
> org.apache.orc.mapreduce.OrcMapreduceRecordReader.nextKeyValue(OrcMapreduceRecordReader.java:96)
>  at 
> org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:37)
>  at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>  at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:511) at 
> scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:177)
>  at 
> org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
>  at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99) at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52) at 
> org.apache.spark.scheduler.Task.run(Task.scala:131) at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
>  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439) at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500) at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
> Source) at java.base/java.lang.Thread.run(Unknown Source)Caused by: 
> java.io.EOFException: End of file reached before reading fully. at 
> org.apache.hadoop.fs.s3a.S3AInputStream.readFully(S3AInputStream.java:702) at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111) 
> at 
> org.apache.orc.impl.RecordReaderUtils.readDiskRanges(RecordReaderUtils.java:566)
>  at 
> org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readFileData(RecordReaderUtils.java:285)
>  at 
> org.apache.orc.impl.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:1237)
>  at 
> org.apache.orc.impl.RecordReaderImpl.readStripe(RecordReaderImpl.java:1105) 
> at 
> org.apache.orc.impl.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1256)
>  at 
> org.apache.orc.impl.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1291)
>  at 
> org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1327) 
> ... 20 more
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #3168: HDFS-16106. Fix flaky unit test TestDFSShell

2021-07-01 Thread GitBox


tomscut commented on pull request #3168:
URL: https://github.com/apache/hadoop/pull/3168#issuecomment-872713688


   Thanks @ayushtkn for your review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17765) ABFS: Use Unique File Paths in Tests

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17765?focusedWorklogId=617973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617973
 ]

ASF GitHub Bot logged work on HADOOP-17765:
---

Author: ASF GitHub Bot
Created on: 02/Jul/21 04:33
Start Date: 02/Jul/21 04:33
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #3153:
URL: https://github.com/apache/hadoop/pull/3153#discussion_r662729508



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
##
@@ -25,6 +25,8 @@
 import java.util.UUID;
 import java.util.concurrent.Callable;
 
+import org.apache.commons.lang3.StringUtils;

Review comment:
   import order




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 617973)
Time Spent: 1h 40m  (was: 1.5h)

> ABFS: Use Unique File Paths in Tests
> 
>
> Key: HADOOP-17765
> URL: https://issues.apache.org/jira/browse/HADOOP-17765
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Many of the ABFS driver tests use common names for file paths (e.g., 
> "/testfile"). This poses a risk of errors during parallel test runs when 
> static variables (such as those for monitoring stats) affected by file paths 
> are introduced.
> Using unique test file names will avoid possible errors arising from shared 
> resources during parallel runs.
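
A minimal sketch of the kind of helper this suggests (hypothetical; the class 
and method names are illustrative, not taken from the actual patch):
{code:java}
import java.util.UUID;

import org.apache.hadoop.fs.Path;
import org.junit.Rule;
import org.junit.rules.TestName;

public abstract class UniqueTestPathSupport {
  // JUnit's TestName rule exposes the running test's method name; combining
  // it with a random UUID keeps paths unique even when the same test runs
  // in parallel against one container.
  @Rule
  public TestName methodName = new TestName();

  protected Path uniqueTestPath() {
    return new Path("/" + methodName.getMethodName() + "-" + UUID.randomUUID());
  }
}
{code}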



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #3153: HADOOP-17765. ABFS: Use Unique File Paths in Tests

2021-07-01 Thread GitBox


bilaharith commented on a change in pull request #3153:
URL: https://github.com/apache/hadoop/pull/3153#discussion_r662729508



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
##
@@ -25,6 +25,8 @@
 import java.util.UUID;
 import java.util.concurrent.Callable;
 
+import org.apache.commons.lang3.StringUtils;

Review comment:
   import order




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17755) EOF reached error reading ORC file on S3A

2021-07-01 Thread Arghya Saha (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17373213#comment-17373213
 ] 

Arghya Saha edited comment on HADOOP-17755 at 7/2/21, 4:17 AM:
---

[~ste...@apache.org] Apologies for the delay; I was struggling to build Spark. 
I have now built Spark 3.1.1 with Hadoop 3.3.1 and ran the same job. The good 
news is that we no longer see the error. However, I have noticed an issue 
which is concerning.

The input data reported by Spark (Hadoop 3.3.1) was almost double, and the 
read runtime also increased (around 20%) compared to Spark (Hadoop 3.2.0) with 
the exact same resources and configuration. This is happening with other jobs 
as well, which were not impacted by the read-fully error stated above.

*I had the same exact issue when using the workaround fs.s3a.readahead.range = 
1G with Hadoop 3.2.0.*

Further details below:

|Hadoop Version|Actual size of the files (in SQL Tab)|Reported size of the files (in Stages)|Time to complete the Stage|fs.s3a.readahead.range|
|Hadoop 3.2.0|29.3 GiB|29.3 GiB|23 min|64K|
|Hadoop 3.3.1|29.3 GiB|*{color:#ff0000}58.7 GiB{color}*|*{color:#ff0000}27 min{color}*|{color:#172b4d}64K{color}|
|Hadoop 3.2.0|29.3 GiB|*{color:#ff0000}58.7 GiB{color}*|*{color:#ff0000}~27 min{color}*|{color:#172b4d}1G{color}|

* *Shuffle Write* is the same (95.9 GiB) for all three cases above.

I was expecting some improvement (or at least parity with 3.2.0) in read 
operations with Hadoop 3.3.1; please suggest how to approach and resolve this.
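
For a controlled comparison, a minimal sketch (my own, values taken from the 
table above) of pinning the read setting in question:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class ReadaheadProbe {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Baseline from the first table row: 64K readahead.
    conf.set("fs.s3a.readahead.range", "64K");
    // Switching this to "1G" reproduced the doubled bytes-read column on
    // Hadoop 3.2.0; fs.s3a.experimental.input.fadvise ("normal" by default)
    // is the other knob worth pinning when comparing runs.
    System.out.println(conf.get("fs.s3a.readahead.range"));
  }
}
{code}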

 

 


was (Author: arghya18):
[~ste...@apache.org] Apologies for the delay; I was struggling to build Spark. 
I have now built Spark 3.1.1 with Hadoop 3.3.1 and ran the same job. The good 
news is that we no longer see the error. However, I have noticed an issue 
which is concerning.

The input data reported by Spark (Hadoop 3.3.1) was almost double, and the 
read runtime also increased (around 20%) compared to Spark (Hadoop 3.2.0) with 
the exact same resources and configuration. This is happening with other jobs 
as well, which were not impacted by the read-fully error stated above.

*I had the same exact issue when using the workaround fs.s3a.readahead.range = 
1G with Hadoop 3.2.0.*

Further details below:

|Hadoop Version|Actual size of the files (in SQL Tab)|Reported size of the files (in Stages)|Time to complete the Stage|fs.s3a.readahead.range|
|Hadoop 3.2.0|29.3 GiB|29.3 GiB|23 min|64K|
|Hadoop 3.3.1|29.3 GiB|*{color:#FF0000}58.7 GiB{color}*|*{color:#FF0000}27 min{color}*|{color:#172b4d}64K{color}|
|Hadoop 3.2.0|29.3 GiB|*{color:#FF0000}58.7 GiB{color}*|*{color:#FF0000}~27 min{color}*|{color:#172b4d}1G{color}|

I was expecting some improvement (or at least parity with 3.2.0) in read 
operations with Hadoop 3.3.1; please suggest how to approach and resolve this.

 

 

> EOF reached error reading ORC file on S3A
> -
>
> Key: HADOOP-17755
> URL: https://issues.apache.org/jira/browse/HADOOP-17755
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: Hadoop 3.2.0
>Reporter: Arghya Saha
>Priority: Major
>
> Hi, I am trying to do some transformations using Spark 3.1.1 with Hadoop 3.2 
> on K8s, using s3a.
> I have around 700 GB of data to read and around 200 executors (5 vCores and 
> 30G each).
> It's able to read most of the files in the problematic stage (Scan orc => 
> Filter => Project) but fails on a few files at the end with the error below. 
> The size of the file mentioned in the error is around 140 MB, and all other 
> files are of similar size.
> I am able to read and rewrite the specific file mentioned, which suggests the 
> file is not corrupted.
> Let me know if further information is required.
>  
> {code:java}
> java.io.IOException: Error reading file: 
> s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orcjava.io.IOException:
>  Error reading file: 
> s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orc
>  at 
> org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1331) at 
> org.apache.orc.mapreduce.OrcMapreduceRecordReader.ensureBatch(OrcMapreduceRecordReader.java:78)
>  at 
> org.apache.orc.mapreduce.OrcMapreduceRecordReader.nextKeyValue(OrcMapreduceRecordReader.java:96)
>  at 
> org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:37)
>  at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>  at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:511) at 
> scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458

[jira] [Commented] (HADOOP-17755) EOF reached error reading ORC file on S3A

2021-07-01 Thread Arghya Saha (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17373213#comment-17373213
 ] 

Arghya Saha commented on HADOOP-17755:
--

[~ste...@apache.org] Apologies for the delay; I was struggling to build Spark. 
I have now built Spark 3.1.1 with Hadoop 3.3.1 and ran the same job. The good 
news is that we no longer see the error. However, I have noticed an issue 
which is concerning.

The input data reported by Spark (Hadoop 3.3.1) was almost double, and the 
read runtime also increased (around 20%) compared to Spark (Hadoop 3.2.0) with 
the exact same resources and configuration. This is happening with other jobs 
as well, which were not impacted by the read-fully error stated above.

*I had the same exact issue when using the workaround fs.s3a.readahead.range = 
1G with Hadoop 3.2.0.*

Further details below:

|Hadoop Version|Actual size of the files (in SQL Tab)|Reported size of the files (in Stages)|Time to complete the Stage|fs.s3a.readahead.range|
|Hadoop 3.2.0|29.3 GiB|29.3 GiB|23 min|64K|
|Hadoop 3.3.1|29.3 GiB|*{color:#FF0000}58.7 GiB{color}*|*{color:#FF0000}27 min{color}*|{color:#172b4d}64K{color}|
|Hadoop 3.2.0|29.3 GiB|*{color:#FF0000}58.7 GiB{color}*|*{color:#FF0000}~27 min{color}*|{color:#172b4d}1G{color}|

I was expecting some improvement (or at least parity with 3.2.0) in read 
operations with Hadoop 3.3.1; please suggest how to approach and resolve this.

 

 

> EOF reached error reading ORC file on S3A
> -
>
> Key: HADOOP-17755
> URL: https://issues.apache.org/jira/browse/HADOOP-17755
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0
> Environment: Hadoop 3.2.0
>Reporter: Arghya Saha
>Priority: Major
>
> Hi, I am trying to do some transformations using Spark 3.1.1 with Hadoop 3.2 
> on K8s, using s3a.
> I have around 700 GB of data to read and around 200 executors (5 vCores and 
> 30G each).
> It's able to read most of the files in the problematic stage (Scan orc => 
> Filter => Project) but fails on a few files at the end with the error below. 
> The size of the file mentioned in the error is around 140 MB, and all other 
> files are of similar size.
> I am able to read and rewrite the specific file mentioned, which suggests the 
> file is not corrupted.
> Let me know if further information is required.
>  
> {code:java}
> java.io.IOException: Error reading file: 
> s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orcjava.io.IOException:
>  Error reading file: 
> s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orc
>  at 
> org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1331) at 
> org.apache.orc.mapreduce.OrcMapreduceRecordReader.ensureBatch(OrcMapreduceRecordReader.java:78)
>  at 
> org.apache.orc.mapreduce.OrcMapreduceRecordReader.nextKeyValue(OrcMapreduceRecordReader.java:96)
>  at 
> org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:37)
>  at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>  at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:511) at 
> scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at 
> org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:177)
>  at 
> org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
>  at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99) at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52) at 
> org.apache.spark.scheduler.Task.run(Task.scala:131) at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
>  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439) at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500) at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
> Source) at java.base/java.lang.Thread.run(Unknown Source)Caused by: 
> java.io.EOFException: End of file reached before reading fully. at 
> org.apache.hadoop.fs.s3a.S3AInputStream.readFully(S3AInputStream.java:702) at 
> org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111) 
> at 
> org.apache.orc.impl.RecordReaderUtils.readDiskRanges(RecordReaderUtils.java:566)
>  at 
> org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readFileData(RecordReaderUtils.java:285)
>  at 
> org.apache.orc.impl.RecordReaderImpl.rea

[GitHub] [hadoop] tomscut commented on pull request #3168: HDFS-16106. Fix flaky unit test TestDFSShell

2021-07-01 Thread GitBox


tomscut commented on pull request #3168:
URL: https://github.com/apache/hadoop/pull/3168#issuecomment-872657263


   > Thank you @tomscut
   
   Thanks @aajisaka for your review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut edited a comment on pull request #3168: HDFS-16106. Fix flaky unit test TestDFSShell

2021-07-01 Thread GitBox


tomscut edited a comment on pull request #3168:
URL: https://github.com/apache/hadoop/pull/3168#issuecomment-872649177


   Hi @aajisaka @tasanuma @jojochuang, could you please help take a look at 
this? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #3168: HDFS-16106. Fix flaky unit test TestDFSShell

2021-07-01 Thread GitBox


tomscut commented on pull request #3168:
URL: https://github.com/apache/hadoop/pull/3168#issuecomment-872649177


   Hi @aajisaka @tamaashu @jojochuang, could you please help take a look at 
this? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut opened a new pull request #3168: HDFS-16106. Fix flaky unit test TestDFSShell

2021-07-01 Thread GitBox


tomscut opened a new pull request #3168:
URL: https://github.com/apache/hadoop/pull/3168


   JIRA: [HDFS-16106](https://issues.apache.org/jira/browse/HDFS-16106)
   
   This unit test occasionally fails.
   
   The value set for dfs.namenode.accesstime.precision is too low; as a result, 
during execution of the test method the access time can be set several times, 
eventually leading to a failed assertion.
   
   IMO, dfs.namenode.accesstime.precision should be greater than or equal to 
the timeout (120s) of TestDFSShell#testCopyCommandsWithPreserveOption(), or be 
set directly to 0 to disable this feature.
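   
   A minimal sketch of the first option (assuming the usual Configuration setup 
in TestDFSShell; the key constant exists in DFSConfigKeys):
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.hdfs.DFSConfigKeys;
   
   public class AccessTimePrecisionFix {
     public static void main(String[] args) {
       // Make the precision at least as long as the 120s test timeout, so the
       // access time cannot be rolled forward between the copy and the assert.
       Configuration conf = new Configuration();
       conf.setLong(DFSConfigKeys.DFS_NAMENODE_ACCESSTIME_PRECISION_KEY,
           120 * 1000L);
       System.out.println(conf.getLong(
           DFSConfigKeys.DFS_NAMENODE_ACCESSTIME_PRECISION_KEY, -1));
     }
   }
   ```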
   
   ```[ERROR] Tests run: 52, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 
106.778 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestDFSShell
   [ERROR] 
testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell)  Time 
elapsed: 2.353 s  <<< FAILURE!
   java.lang.AssertionError: expected:<1625095098319> but was:<1625095099374>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:633)
at 
org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2282)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   
   [ERROR] 
testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell)  Time 
elapsed: 2.467 s  <<< FAILURE!
   java.lang.AssertionError: expected:<1625095192527> but was:<1625095193950>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:633)
at 
org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2323)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   
   [ERROR] 
testCopyCommandsWithPreserveOption(org.apache.hadoop.hdfs.TestDFSShell)  Time 
elapsed: 2.173 s  <<< FAILURE!
   java.lang.AssertionError: expected:<1625095196756> but was:<1625095197975>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:633)
at 
org.apache.hadoop.hdfs.TestDFSShell.testCopyCommandsWithPreserveOption(TestDFSShell.java:2303)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosi

[jira] [Work logged] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17028?focusedWorklogId=617937&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617937
 ]

ASF GitHub Bot logged work on HADOOP-17028:
---

Author: ASF GitHub Bot
Created on: 02/Jul/21 01:12
Start Date: 02/Jul/21 01:12
Worklog Time Spent: 10m 
  Work Description: shvachko edited a comment on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-872643068


   Looks good generally. I checked that the tests are catching the previous errors.
   
   - There seems to be one last checkstyle warning.
   - Also, I see you use `import static org.junit.Assert.*;` and 
`viewfs.Constants.*;`. We should import only what is actually needed; avoid 
*-imports.
   - And I found an empty line added in `ViewFileSystem`. Don't know why 
checkstyle didn't catch it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 617937)
Time Spent: 6h 20m  (was: 6h 10m)

> ViewFS should initialize target filesystems lazily
> --
>
> Key: HADOOP-17028
> URL: https://issues.apache.org/jira/browse/HADOOP-17028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: client-mounts, fs, viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Currently ViewFS initializes all configured target filesystems in 
> viewfs#init itself.
> Some target filesystem initializations involve creating heavy objects and 
> proxy connections. E.g., DistributedFileSystem#initialize will create a 
> DFSClient object, which will create proxy connections to the NN, etc.
> For example, say ViewFS is configured with 10 targets with hdfs URIs and 2 
> targets with s3a.
> If a client only works with the s3a targets, ViewFS will still initialize 
> all targets irrespective of which ones the client wants to work with. That 
> means the client will create 10 DFS initializations and 2 s3a 
> initializations; the DFS initializations are unnecessary here. So it would 
> be a good idea to initialize a target fs only when the first usage call 
> arrives for that particular target fs scheme.
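
Not the actual patch, but a minimal sketch of the lazy-initialization idea the 
description calls for: cache per target URI, so the heavy FileSystem.get runs 
only on first access to a mount point.
{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.URI;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

class LazyTargets {
  private final Map<URI, FileSystem> cache = new ConcurrentHashMap<>();
  private final Configuration conf;

  LazyTargets(Configuration conf) {
    this.conf = conf;
  }

  // DFSClient/proxy creation is deferred until the first call that actually
  // touches this target, instead of running for every configured target
  // during viewfs#init.
  FileSystem get(URI target) {
    return cache.computeIfAbsent(target, uri -> {
      try {
        return FileSystem.get(uri, conf);
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    });
  }
}
{code}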



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] shvachko edited a comment on pull request #2260: HADOOP-17028. ViewFS should initialize mounted target filesystems lazily

2021-07-01 Thread GitBox


shvachko edited a comment on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-872643068


   Looks good generally. I checked that the tests are catching the previous errors.
   
   - There seems to be one last checkstyle warning.
   - Also, I see you use `import static org.junit.Assert.*;` and 
`viewfs.Constants.*;`. We should import only what is actually needed; avoid 
*-imports.
   - And I found an empty line added in `ViewFileSystem`. Don't know why 
checkstyle didn't catch it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17028) ViewFS should initialize target filesystems lazily

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17028?focusedWorklogId=617936&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617936
 ]

ASF GitHub Bot logged work on HADOOP-17028:
---

Author: ASF GitHub Bot
Created on: 02/Jul/21 01:11
Start Date: 02/Jul/21 01:11
Worklog Time Spent: 10m 
  Work Description: shvachko commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-872643068


   Looks good generally. I checked that the tests are catching the previous errors.
   
   - There seems to be one last checkstyle warning.
   - Also, I see you use {{import static org.junit.Assert.*;}} and 
{{viewfs.Constants.*;}}. We should import only what is actually needed; avoid 
*-imports.
   - And I found an empty line added in {{ViewFileSystem}}. Don't know why 
checkstyle didn't catch it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 617936)
Time Spent: 6h 10m  (was: 6h)

> ViewFS should initialize target filesystems lazily
> --
>
> Key: HADOOP-17028
> URL: https://issues.apache.org/jira/browse/HADOOP-17028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: client-mounts, fs, viewfs
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> Currently ViewFS initializes all configured target filesystems in 
> viewfs#init itself.
> Some target filesystem initializations involve creating heavy objects and 
> proxy connections. E.g., DistributedFileSystem#initialize will create a 
> DFSClient object, which will create proxy connections to the NN, etc.
> For example, say ViewFS is configured with 10 targets with hdfs URIs and 2 
> targets with s3a.
> If a client only works with the s3a targets, ViewFS will still initialize 
> all targets irrespective of which ones the client wants to work with. That 
> means the client will create 10 DFS initializations and 2 s3a 
> initializations; the DFS initializations are unnecessary here. So it would 
> be a good idea to initialize a target fs only when the first usage call 
> arrives for that particular target fs scheme.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] shvachko commented on pull request #2260: HADOOP-17028. ViewFS should initialize mounted target filesystems lazily

2021-07-01 Thread GitBox


shvachko commented on pull request #2260:
URL: https://github.com/apache/hadoop/pull/2260#issuecomment-872643068


   Looks good generally. I checked that the tests are catching the previous errors.
   
   - There seems to be one last checkstyle warning.
   - Also, I see you use {{import static org.junit.Assert.*;}} and 
{{viewfs.Constants.*;}}. We should import only what is actually needed; avoid 
*-imports.
   - And I found an empty line added in {{ViewFileSystem}}. Don't know why 
checkstyle didn't catch it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui closed pull request #3161: HDFS-16105. Edit log corruption due to mismatch between fileId and path

2021-07-01 Thread GitBox


ferhui closed pull request #3161:
URL: https://github.com/apache/hadoop/pull/3161


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #3140: HDFS-16088. Standby NameNode process getLiveDatanodeStorageReport req…

2021-07-01 Thread GitBox


tomscut commented on pull request #3140:
URL: https://github.com/apache/hadoop/pull/3140#issuecomment-872639218


   Failed junit tests | hadoop.hdfs.TestRollingUpgrade
   
   Hi @Hexiaoqiao, this failed unit test works fine locally, and I added a 
separate unit test. Could you please take a quick look? Thanks.
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17787?focusedWorklogId=617736&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617736
 ]

ASF GitHub Bot logged work on HADOOP-17787:
---

Author: ASF GitHub Bot
Created on: 01/Jul/21 18:25
Start Date: 01/Jul/21 18:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3167:
URL: https://github.com/apache/hadoop/pull/3167#issuecomment-872458993


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   2m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  12m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  cc  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  cc  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  12m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  32m  1s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  92m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3167 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs compile 
cc mvnsite javac unit golang |
   | uname | Linux 4a75c307369f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a042657b94031d34c65892a1d41190f61c1adcf8 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/testReport/ |
   | Max. process+thread count | 693 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastru

[GitHub] [hadoop] hadoop-yetus commented on pull request #3167: HADOOP-17787. Refactor fetching of credentials in Jenkins

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3167:
URL: https://github.com/apache/hadoop/pull/3167#issuecomment-872458993


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   2m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  12m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  cc  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  cc  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  12m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  32m  1s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  92m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3167 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs compile 
cc mvnsite javac unit golang |
   | uname | Linux 4a75c307369f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a042657b94031d34c65892a1d41190f61c1adcf8 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/testReport/ |
   | Max. process+thread count | 693 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3140: HDFS-16088. Standby NameNode process getLiveDatanodeStorageReport req…

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3140:
URL: https://github.com/apache/hadoop/pull/3140#issuecomment-872416289


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 243m 35s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3140/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 336m 38s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3140/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3140 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 29cf82750999 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 218fafc781df4e88022a6077eb3da5bc8c876f5d |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3140/2/testReport/ |
   | Max. process+thread count | 3606 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3140/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Work logged] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17787?focusedWorklogId=617699&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617699
 ]

ASF GitHub Bot logged work on HADOOP-17787:
---

Author: ASF GitHub Bot
Created on: 01/Jul/21 16:53
Start Date: 01/Jul/21 16:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3167:
URL: https://github.com/apache/hadoop/pull/3167#issuecomment-872402574


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  20m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  48m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  31m 56s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 121m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3167 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs compile 
cc mvnsite javac unit golang |
   | uname | Linux c9fd4da289a6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a042657b94031d34c65892a1d41190f61c1adcf8 |
   | Default Java | Red Hat, Inc.-1.8.0_292-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/testReport/ |
   | Max. process+thread count | 543 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/console |
   | versions | git=2.27.0 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 617699)
Time Spent: 20m  (was: 10m)

> Refactor fetching of credentials in Jenkins
> ---
>
> Key: HADOOP-17787
> URL: https://issues.apache.org/jira/browse/HADOOP-17787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Need to refactor fetching of credentials in Jenkinsfile.



--
This message was sent by Atlassian Jira

[GitHub] [hadoop] hadoop-yetus commented on pull request #3167: HADOOP-17787. Refactor fetching of credentials in Jenkins

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3167:
URL: https://github.com/apache/hadoop/pull/3167#issuecomment-872402574


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  20m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  48m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  31m 56s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 121m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3167 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs compile 
cc mvnsite javac unit golang |
   | uname | Linux c9fd4da289a6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a042657b94031d34c65892a1d41190f61c1adcf8 |
   | Default Java | Red Hat, Inc.-1.8.0_292-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/testReport/ |
   | Max. process+thread count | 543 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3167/1/console |
   | versions | git=2.27.0 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17773) Avoid using zookeeper deprecated API and classes

2021-07-01 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372924#comment-17372924
 ] 

Steve Loughran commented on HADOOP-17773:
-

was this meant to be a new branch in apache/ ?

> Avoid using zookeeper deprecated API and classes
> 
>
> Key: HADOOP-17773
> URL: https://issues.apache.org/jira/browse/HADOOP-17773
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In the latest version of ZooKeeper, some internal classes that are used in 
> Hadoop test code have been removed, for example ServerCnxnFactoryAccessor.
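
For illustration, a hedged sketch of the usual replacement: starting an 
embedded ZooKeeper for tests through the public ServerCnxnFactory API instead 
of removed internals like ServerCnxnFactoryAccessor. The directory, port, and 
connection limit below are placeholders, not Hadoop's actual test setup.

{code:java}
// Hedged sketch: an embedded ZooKeeper test server via the public API.
// Paths, ports, and limits are placeholders, not Hadoop's test code.
import java.io.File;

import org.apache.zookeeper.server.ServerCnxnFactory;
import org.apache.zookeeper.server.ZooKeeperServer;

class EmbeddedZkSketch {
  static ServerCnxnFactory start(File dataDir, int clientPort) throws Exception {
    // tickTime of 2000ms; snapshot and log share one directory for simplicity.
    ZooKeeperServer zks = new ZooKeeperServer(dataDir, dataDir, 2000);
    // Public replacement for reaching into server internals: create the
    // connection factory and run the server in-process.
    ServerCnxnFactory factory = ServerCnxnFactory.createFactory(clientPort, 60);
    factory.startup(zks);
    return factory;  // caller shuts down via factory.shutdown()
  }
}
{code}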



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17784) hadoop-aws landsat-pds test bucket will be deleted after Jul 1, 2021

2021-07-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17784:

Summary: hadoop-aws landsat-pds test bucket will be deleted after Jul 1, 
2021  (was: hadoop-aws landsat-pds test bucket will be deleted on Jul 1, 2021)

> hadoop-aws landsat-pds test bucket will be deleted after Jul 1, 2021
> 
>
> Key: HADOOP-17784
> URL: https://issues.apache.org/jira/browse/HADOOP-17784
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3, test
>Reporter: Leona Yoda
>Priority: Major
>
> I found an announcement that the landsat-pds bucket will be deleted on July 
> 1, 2021 (https://registry.opendata.aws/landsat-8/),
> and I think this bucket is used in the tests of the hadoop-aws module:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java#L93]
>  
> At this time I can still access the bucket, but we may have to change the 
> test bucket someday.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #3159: HDFS-16099. Make bpServiceToActive to be volatile.

2021-07-01 Thread GitBox


Hexiaoqiao commented on pull request #3159:
URL: https://github.com/apache/hadoop/pull/3159#issuecomment-872358780


   Committed to trunk.
   Thanks @zhangshuyan0 for your contribution! Thanks @jojochuang and @ayushtkn 
for your reviews!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao merged pull request #3159: HDFS-16099. Make bpServiceToActive to be volatile.

2021-07-01 Thread GitBox


Hexiaoqiao merged pull request #3159:
URL: https://github.com/apache/hadoop/pull/3159


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17784) hadoop-aws landsat-pds test bucket will be deleted on Jul 1, 2021

2021-07-01 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372895#comment-17372895
 ] 

Steve Loughran commented on HADOOP-17784:
-

Apparently it'll be moving to requester-pays first, as a warning that it'll go away:

https://lists.osgeo.org/pipermail/landsat-pds/2021-June/000181.html

What do we want from a new file?

* AWS funds the data reads, keeping costs down, especially for open source 
developers without someone paying their bills.
* eliminates the overhead of creating a multi-MB dataset on every test run
* CSV.GZ, so we can verify the code can read .csv.gz data created through 
other applications
* supports S3 Select
* world/anonymous readable
* read-only, so we can do permission and credential tests
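
For context, the test-side hook for this already exists: the CSV test file is 
configurable through the fs.s3a.scale.test.csvfile option defined in 
S3ATestConstants (linked in the issue below), so a replacement object could be 
swapped in by configuration. A minimal sketch, assuming a hypothetical 
world-readable replacement bucket:

{code:java}
// Minimal sketch: point the hadoop-aws CSV test file at a different public
// .csv.gz object. "s3a://some-public-bucket/data.csv.gz" is a placeholder;
// the option name is taken from S3ATestConstants as of trunk.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CsvTestFileProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.scale.test.csvfile", "s3a://some-public-bucket/data.csv.gz");
    // World-readable data lets the tests run with anonymous credentials.
    conf.set("fs.s3a.aws.credentials.provider",
        "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider");

    Path csv = new Path(conf.get("fs.s3a.scale.test.csvfile"));
    try (FileSystem fs = csv.getFileSystem(conf);
         FSDataInputStream in = fs.open(csv)) {
      byte[] buf = new byte[1024];
      System.out.println("read " + in.read(buf) + " bytes from " + csv);
    }
  }
}
{code}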




> hadoop-aws landsat-pds test bucket will be deleted on Jul 1, 2021
> -
>
> Key: HADOOP-17784
> URL: https://issues.apache.org/jira/browse/HADOOP-17784
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3, test
>Reporter: Leona Yoda
>Priority: Major
>
> I found an announcement that the landsat-pds bucket will be deleted on July 
> 1, 2021 (https://registry.opendata.aws/landsat-8/),
> and I think this bucket is used in the tests of the hadoop-aws module:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java#L93]
>  
> At this time I can still access the bucket, but we may have to change the 
> test bucket someday.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17783) Old JQuery version causing security concerns

2021-07-01 Thread Ahmed Abdelrahman (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Abdelrahman updated HADOOP-17783:
---
Description: 
These fixes are required for Hadoop 3.3.0. 

Can you please update the following jQuery versions used by the UI? They are 
causing security vulnerability concerns: 

URL : http://web-address:8088/static/jquery/jquery-3.4.1.min.js Installed 
version : 3.4.1 Fixed version : 3.5.0 or latest

URL : http://web-address:8080/static/jquery-1.12.4.min.js  Installed version : 
1.12.4 Fixed version : 3.5.0 or latest

These also extend to the Spark-on-YARN cluster. I hope I'm not mixing up my 
file paths!

 

Thank you

  was:
These fixes are required for Hadoop 3.3.0. 

Can you please update the following jqueries for the UI, they are causing 
security and vulnerabilities concerns: 

URL : 
[http://some-address:8088/static/jquery/jquery-3.4.1.min.js|http://casmvlpe1aai01.phx.aexp.com:8088/static/jquery/jquery-3.4.1.min.js]
 Installed version : 3.4.1 Fixed version : 3.5.0 or latest

URL : 
[http://some-address:8080/static/jquery-1.12.4.min.js|http://casmvlpe1aai01.phx.aexp.com:8080/static/jquery-1.12.4.min.js]
 Installed version : 1.12.4 Fixed version : 3.5.0 or latest

These also extend to Spark-on-Yarn cluster. I hope I'm not messing up with my 
files paths!

 

Thank you


> Old JQuery version causing security concerns
> 
>
> Key: HADOOP-17783
> URL: https://issues.apache.org/jira/browse/HADOOP-17783
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.3.0
> Environment: Redhat Spark-Hadoop cluster.
>Reporter: Ahmed Abdelrahman
>Priority: Blocker
>
> These fixes are required for Hadoop 3.3.0. 
> Can you please update the following jQuery versions used by the UI? They 
> are causing security vulnerability concerns: 
> URL : http://web-address:8088/static/jquery/jquery-3.4.1.min.js Installed 
> version : 3.4.1 Fixed version : 3.5.0 or latest
> URL : http://web-address:8080/static/jquery-1.12.4.min.js  Installed version 
> : 1.12.4 Fixed version : 3.5.0 or latest
> These also extend to the Spark-on-YARN cluster. I hope I'm not mixing up my 
> file paths!
>  
> Thank you



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17787?focusedWorklogId=617634&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617634
 ]

ASF GitHub Bot logged work on HADOOP-17787:
---

Author: ASF GitHub Bot
Created on: 01/Jul/21 14:50
Start Date: 01/Jul/21 14:50
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra opened a new pull request #3167:
URL: https://github.com/apache/hadoop/pull/3167


   * Moved fetching of username and password
  to a function.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 617634)
Remaining Estimate: 0h
Time Spent: 10m

> Refactor fetching of credentials in Jenkins
> ---
>
> Key: HADOOP-17787
> URL: https://issues.apache.org/jira/browse/HADOOP-17787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Need to refactor fetching of credentials in Jenkinsfile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17787:

Labels: pull-request-available  (was: )

> Refactor fetching of credentials in Jenkins
> ---
>
> Key: HADOOP-17787
> URL: https://issues.apache.org/jira/browse/HADOOP-17787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Need to refactor fetching of credentials in Jenkinsfile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] GauthamBanasandra opened a new pull request #3167: HADOOP-17787. Refactor fetching of credentials in Jenkins

2021-07-01 Thread GitBox


GauthamBanasandra opened a new pull request #3167:
URL: https://github.com/apache/hadoop/pull/3167


   * Moved fetching of username and password
  to a function.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #3160: HDFS-16104. Remove unused parameter and fix java doc for DiskBalancerCLI

2021-07-01 Thread GitBox


tomscut commented on pull request #3160:
URL: https://github.com/apache/hadoop/pull/3160#issuecomment-872309002


   > Thanx @tomscut for the contribution!!!
   
   Thanks @ayushtkn for the merge.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-07-01 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HADOOP-17787:
---

 Summary: Refactor fetching of credentials in Jenkins
 Key: HADOOP-17787
 URL: https://issues.apache.org/jira/browse/HADOOP-17787
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Need to refactor fetching of credentials in Jenkinsfile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17779) Lock File System Creator Semaphore Uninterruptibly

2021-07-01 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372796#comment-17372796
 ] 

David Mollitor commented on HADOOP-17779:
-

[~ste...@apache.org] Thanks for the feedback.

I ran into this issue, as reported, here:

https://issues.apache.org/jira/browse/HIVE-24484?focusedCommentId=17371600&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17371600

So ya, we had to put in some hackiness to work around this change: previously, 
relying on the standard {{InterruptedException}} was enough, but now callers 
need to handle {{InterruptedIOException}} as well, which I would argue is a 
breaking change.

Anything related to networking, etc., should have a timeout to kill the 
connection. Not to mention, the scenario you described is already the case 
today: if a thread has the semaphore and is stuck on a socket connection, 
interrupting that thread does not kill the connection. The thread will only 
unblock from the socket operation if the socket is closed, the socket is not 
connected, or the socket input has been shut down using {{shutdownInput()}}.
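
To make the two behaviours concrete, here is a minimal sketch using a plain 
java.util.concurrent.Semaphore as a stand-in for the creator permits; the 
class name, field name, and permit count are invented for illustration:

{code:java}
import java.util.concurrent.Semaphore;

public class CreatorPermitsSketch {
  private final Semaphore permits = new Semaphore(64);  // count is illustrative

  // Interruptible style: an interrupt surfaces while blocked here, which the
  // current code turns into an InterruptedIOException for the caller.
  void acquireInterruptibly() throws InterruptedException {
    permits.acquire();
  }

  // Style proposed by HADOOP-17779: keep waiting through interrupts. The JDK
  // restores the thread's interrupt status before returning, so callers keep
  // seeing the classic InterruptedException behaviour at their next wait.
  void acquirePermit() {
    permits.acquireUninterruptibly();
  }

  void releasePermit() {
    permits.release();
  }
}
{code}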

> Lock File System Creator Semaphore Uninterruptibly
> --
>
> Key: HADOOP-17779
> URL: https://issues.apache.org/jira/browse/HADOOP-17779
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.1
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{FileSystem}} "creator permits" are acquired in an interruptible way.  This 
> changed the behavior of the call, because previous callers were handling the 
> IOException as a critical error.  An interrupt was handled in the typical 
> {{InterruptedException}} way.  Lastly, there was no documentation of this new 
> event, so again, callers are not prepared.
> Restore the previous behavior and lock the semaphore uninterruptibly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17779) Lock File System Creator Semaphore Uninterruptibly

2021-07-01 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-17779:

Affects Version/s: 3.3.1

> Lock File System Creator Semaphore Uninterruptibly
> --
>
> Key: HADOOP-17779
> URL: https://issues.apache.org/jira/browse/HADOOP-17779
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.1
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{FileSystem}} "creator permits" are acquired in an interruptible way.  This 
> changed the behavior of the call, because previous callers were handling the 
> IOException as a critical error.  An interrupt was handled in the typical 
> {{InterruptedException}} way.  Lastly, there was no documentation of this new 
> event, so again, callers are not prepared.
> Restore the previous behavior and lock the semaphore uninterruptibly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17779) Lock File System Creator Semaphore Uninterruptibly

2021-07-01 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-17779:

Component/s: fs

> Lock File System Creator Semaphore Uninterruptibly
> --
>
> Key: HADOOP-17779
> URL: https://issues.apache.org/jira/browse/HADOOP-17779
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{FileSystem}} "creator permits" are acquired in an interruptible way.  This 
> changed the behavior of the call, because previous callers were handling the 
> IOException as a critical error.  An interrupt was handled in the typical 
> {{InterruptedException}} way.  Lastly, there was no documentation of this new 
> event, so again, callers are not prepared.
> Restore the previous behavior and lock the semaphore uninterruptibly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3161: HDFS-16105. Edit log corruption due to mismatch between fileId and path

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3161:
URL: https://github.com/apache/hadoop/pull/3161#issuecomment-872246146


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 356m 16s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3161/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 450m 35s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestINodeFile |
   |   | hadoop.hdfs.TestFileCreation |
   |   | hadoop.hdfs.server.namenode.TestDeleteRace |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractAppend |
   |   | hadoop.hdfs.TestRenameWhileOpen |
   |   | hadoop.hdfs.TestLease |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
   |   | hadoop.hdfs.TestReservedRawPaths |
   |   | hadoop.hdfs.server.namenode.TestHDFSConcat |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.TestFileContextXAttr |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.TestEncryptionZones |
   |   | hadoop.hdfs.TestEncryptionZonesWithKMS |
   |   | hadoop.hdfs.web.TestWebHDFSXAttr |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestFileAppend3 |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3161/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3161 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 86719da9b39d 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |

[GitHub] [hadoop] ayushtkn commented on pull request #3160: HDFS-16104. Remove unused parameter and fix java doc for DiskBalancerCLI

2021-07-01 Thread GitBox


ayushtkn commented on pull request #3160:
URL: https://github.com/apache/hadoop/pull/3160#issuecomment-872237581


   Thanx @tomscut for the contribution!!!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn merged pull request #3160: HDFS-16104. Remove unused parameter and fix java doc for DiskBalancerCLI

2021-07-01 Thread GitBox


ayushtkn merged pull request #3160:
URL: https://github.com/apache/hadoop/pull/3160


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #3160: HDFS-16104. Remove unused parameter and fix java doc for DiskBalancerCLI

2021-07-01 Thread GitBox


tomscut commented on pull request #3160:
URL: https://github.com/apache/hadoop/pull/3160#issuecomment-872191677


   Hi @ayushtkn, these UTs are related to the change and work fine locally. 
Could you please take a look? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17786) Parallelize stages in Jenkins

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17786?focusedWorklogId=617557&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617557
 ]

ASF GitHub Bot logged work on HADOOP-17786:
---

Author: ASF GitHub Bot
Created on: 01/Jul/21 11:46
Start Date: 01/Jul/21 11:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3166:
URL: https://github.com/apache/hadoop/pull/3166#issuecomment-872176783


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  15m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/2/artifact/out/blanks-eol.txt)
 |  The patch has 8 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  47m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3166 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs |
   | uname | Linux e05c3d46624a 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d20f84a6956c5e4c3c4c3d9e7edc43585f8133e6 |
   | Max. process+thread count | 524 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/2/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 617557)
Time Spent: 20m  (was: 10m)

> Parallelize stages in Jenkins
> -
>
> Key: HADOOP-17786
> URL: https://issues.apache.org/jira/browse/HADOOP-17786
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Jenkins now builds for multiple environments as different stages. Need to 
> parallelize them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3166: HADOOP-17786. Parallelize stages in Jenkins

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3166:
URL: https://github.com/apache/hadoop/pull/3166#issuecomment-872176783


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  15m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/2/artifact/out/blanks-eol.txt)
 |  The patch has 8 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  47m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3166 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs |
   | uname | Linux e05c3d46624a 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d20f84a6956c5e4c3c4c3d9e7edc43585f8133e6 |
   | Max. process+thread count | 524 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3166/2/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhuxiangyi commented on pull request #3155: HDFS-16095. Add lsQuotaList command and getQuotaListing api for hdfs …

2021-07-01 Thread GitBox


zhuxiangyi commented on pull request #3155:
URL: https://github.com/apache/hadoop/pull/3155#issuecomment-872169594


   > It has a potential to hold the fsn/fsd lock for a long time and cause 
service outage or delays.
   
   
   
   > hold the fsn/fsd lock
   
   @kihwal  Thanks for your comment. I tested obtaining quota information for 
100,000 directories, and the lock hold time was about 300ms. Under normal 
circumstances our quota list is limited, so I don't think it will be 
particularly large, though this is just my guess.
   In addition, I ran a test that fetches only the quota directory paths, and 
the same 100,000 directories took only 20ms. Would it be better to first get 
the paths of the quota directories, and then fetch the quota information for 
each through getQuotaUsage?
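
A rough sketch of the two-phase idea in that last question, where the list of 
quota directory paths is assumed to come from the cheap path-only scan 
described above (collecting those paths is not an existing HDFS API):

{code:java}
// Hypothetical two-phase quota listing: gather paths cheaply, then resolve
// each directory's quota via the existing per-path getQuotaUsage() call so
// no single operation holds the fsn/fsd lock across all directories.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.QuotaUsage;

class QuotaListingSketch {
  // quotaDirs would come from the assumed path-only scan (~20ms for 100k dirs).
  static List<QuotaUsage> listQuotas(FileSystem fs, List<Path> quotaDirs)
      throws IOException {
    List<QuotaUsage> result = new ArrayList<>(quotaDirs.size());
    for (Path dir : quotaDirs) {
      result.add(fs.getQuotaUsage(dir));  // short lock per directory
    }
    return result;
  }
}
{code}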


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3164: Fix NPE in Find.java

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3164:
URL: https://github.com/apache/hadoop/pull/3164#issuecomment-872167935


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 54s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 40s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   2m 36s | 
[/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3164/2/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html)
 |  hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  2s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m  2s |  |  hadoop-mapreduce-examples in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 194m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-common-project/hadoop-common |
   |  |  Nullcheck of expr at line 114 of value previously dereferenced in 
org.apache.hadoop.fs.shell.find.Find.buildDescription(ExpressionFactory)  At 
Find.java:114 of value previously dereferenced in 
org.apache.hadoop.fs.shell.find.Find.buildDescription(ExpressionFactory)  At 
Find.java:[line 114] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3164/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3164 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 6d045ac9e9e0 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / dc98af81ce922f62ac0c8dc29c81e75bef636f70 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
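
The SpotBugs warning in the report above flags a classic ordering bug: the
value is dereferenced first and only checked for null afterwards, so the check
is either dead code or the dereference can throw an NPE. A minimal,
hypothetical sketch of the pattern and its fix (class and method names are
invented for illustration; this is not the actual Find.java source):

{code:java}
// Hypothetical illustration of the "nullcheck of value previously
// dereferenced" pattern; not the real org.apache.hadoop.fs.shell.find.Find.
class DescriptionBuilder {

  interface ExpressionFactory {
    String getExpression();
  }

  // Buggy: expr is dereferenced by trim() before the null check,
  // which is exactly the shape of finding SpotBugs reports.
  String buildDescription(ExpressionFactory factory) {
    String expr = factory.getExpression();
    StringBuilder sb = new StringBuilder(expr.trim()); // dereference first...
    if (expr != null) {                                // ...null check later
      sb.append(" (").append(expr.length()).append(" chars)");
    }
    return sb.toString();
  }

  // Fixed: check once, before any dereference.
  String buildDescriptionFixed(ExpressionFactory factory) {
    String expr = factory.getExpression();
    if (expr == null) {
      return "";
    }
    return expr.trim() + " (" + expr.length() + " chars)";
  }
}
{code}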

[GitHub] [hadoop] hadoop-yetus commented on pull request #3165: [Do not commit] Refactor creds in Jenkinsfile

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3165:
URL: https://github.com/apache/hadoop/pull/3165#issuecomment-872159336


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3165/5/artifact/out/blanks-eol.txt)
 |  The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  47m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3165/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3165 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs |
   | uname | Linux 8484275b7ed9 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 60cdb6b8dd399f9bf2ec65f320c64137030803e5 |
   | Max. process+thread count | 521 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3165/5/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
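
For the blanks-eol failure flagged above, git can strip the offending trailing
whitespace while applying the change. A sketch of one way to do that locally
(the .diff URL is GitHub's standard per-PR diff endpoint; branch and file
names are illustrative):

{code:bash}
# Fetch the PR's unified diff (GitHub serves it at <pr-url>.diff)
curl -sL https://github.com/apache/hadoop/pull/3165.diff -o pr-3165.diff

# Re-apply it onto a clean checkout, fixing lines that end in blanks
git checkout trunk
git apply --whitespace=fix pr-3165.diff
{code}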



[GitHub] [hadoop] hadoop-yetus commented on pull request #3164: Fix NPE in Find.java

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3164:
URL: https://github.com/apache/hadoop/pull/3164#issuecomment-872153579


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   2m 35s | 
[/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3164/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html)
 |  hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  16m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 10s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 181m 14s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-common-project/hadoop-common |
   |  |  Nullcheck of expr at line 114 of value previously dereferenced in 
org.apache.hadoop.fs.shell.find.Find.buildDescription(ExpressionFactory)  At 
Find.java:114 of value previously dereferenced in 
org.apache.hadoop.fs.shell.find.Find.buildDescription(ExpressionFactory)  At 
Find.java:[line 114] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3164/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3164 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux d0cda20ee375 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c905fabc5ca69f89cc69f555b69dacdc1c4cbb39 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3164/1/testReport/ |
   | Max. proce

[jira] [Work logged] (HADOOP-17786) Parallelize stages in Jenkins

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17786?focusedWorklogId=617511&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617511
 ]

ASF GitHub Bot logged work on HADOOP-17786:
---

Author: ASF GitHub Bot
Created on: 01/Jul/21 11:04
Start Date: 01/Jul/21 11:04
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra opened a new pull request #3166:
URL: https://github.com/apache/hadoop/pull/3166


   * We now build Hadoop on multiple platforms
  for validation. We need to parallelize these
  per-platform builds for faster validation.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 617511)
Remaining Estimate: 0h
Time Spent: 10m

> Parallelize stages in Jenkins
> -
>
> Key: HADOOP-17786
> URL: https://issues.apache.org/jira/browse/HADOOP-17786
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Jenkins now builds for multiple environments as different stages. Need to 
> parallelize them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
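
The change described above turns the per-platform builds into parallel stages.
A minimal declarative-pipeline sketch of the idea (stage names and steps are
assumptions for illustration; this is not the actual dev-support Jenkinsfile):

{code:groovy}
// Illustrative only: run per-platform validation builds in parallel
// instead of sequentially.
pipeline {
    agent any
    stages {
        stage('Validate on all platforms') {
            parallel {
                stage('Ubuntu focal') {
                    steps {
                        sh 'dev-support/bin/hadoop.sh'
                    }
                }
                stage('Debian 10') {
                    steps {
                        sh 'dev-support/bin/hadoop.sh'
                    }
                }
            }
        }
    }
}
{code}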



[jira] [Updated] (HADOOP-17786) Parallelize stages in Jenkins

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17786:

Labels: pull-request-available  (was: )

> Parallelize stages in Jenkins
> -
>
> Key: HADOOP-17786
> URL: https://issues.apache.org/jira/browse/HADOOP-17786
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Jenkins now builds for multiple environments as different stages. Need to 
> parallelize them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] GauthamBanasandra opened a new pull request #3166: HADOOP-17786. Parallelize stages in Jenkins

2021-07-01 Thread GitBox


GauthamBanasandra opened a new pull request #3166:
URL: https://github.com/apache/hadoop/pull/3166


   * We now build Hadoop on multiple platforms
  for validation. We need to parallelize these
  per-platform builds for faster validation.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17290) ABFS: Add Identifiers to Client Request Header

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17290?focusedWorklogId=617508&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617508
 ]

ASF GitHub Bot logged work on HADOOP-17290:
---

Author: ASF GitHub Bot
Created on: 01/Jul/21 11:02
Start Date: 01/Jul/21 11:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2520:
URL: https://github.com/apache/hadoop/pull/2520#issuecomment-872117658


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 37 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  14m 29s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 11s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  72m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2520/23/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2520 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 093ca10a13ea 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8cf0fe32c89351a55c475cdd91261335591129f8 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2520/23/testReport/ |
   | Max. process+thread count | 543 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2520: HADOOP-17290. ABFS: Add Identifiers to Client Request Header

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #2520:
URL: https://github.com/apache/hadoop/pull/2520#issuecomment-872117658


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 37 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  14m 29s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 11s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  72m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2520/23/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2520 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 093ca10a13ea 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8cf0fe32c89351a55c475cdd91261335591129f8 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2520/23/testReport/ |
   | Max. process+thread count | 543 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2520/23/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop] hadoop-yetus commented on pull request #3165: [Do not commit] Refactor creds in Jenkinsfile

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3165:
URL: https://github.com/apache/hadoop/pull/3165#issuecomment-872100877


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 48s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  13m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3165/4/artifact/out/blanks-eol.txt)
 |  The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  42m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3165/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3165 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs |
   | uname | Linux 2cffc6bef16f 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c0e4a8378c8cbef3524cb888f1ebcc0b14c826ca |
   | Max. process+thread count | 633 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3165/4/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhuxiong commented on pull request #3162: fix state in applicationhistory web page

2021-07-01 Thread GitBox


zhuxiong commented on pull request #3162:
URL: https://github.com/apache/hadoop/pull/3162#issuecomment-872100143


   Already did an end-to-end test; see the last comment, so no unit test was added.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17784) hadoop-aws test bucket woul be deleted on Jul 1, 2021

2021-07-01 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372617#comment-17372617
 ] 

Steve Loughran commented on HADOOP-17784:
-

Uh oh. This is our critical public dataset, as it gives us CSV files with no 
setup delays and with AWS paying the download charges.

We use it for testing
* random IO seek policy performance
* list performance with large directories
* S3 Select.

This is going to be a disaster. We will have to see if we can find some other 
public store with what we need (a large CSV file that's free to download).

There have always been options to override the path to the CSV file, which was 
done to support private S3 store testing. We'll need to verify that switching 
to such an override will allow old builds to keep testing happily. Some test 
suites do skip if the target CSV file != the one in the constants (region 
checking etc.)
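
A sketch of what that override could look like in the hadoop-aws test
configuration, assuming the {{fs.s3a.scale.test.csvfile}} key defined in
S3ATestConstants; the bucket and object in the value are placeholders, not a
recommendation:

{code:xml}
<!-- Hypothetical entry in the hadoop-aws test configuration
     (e.g. auth-keys.xml). The key comes from S3ATestConstants;
     the value below is an illustrative placeholder. -->
<property>
  <name>fs.s3a.scale.test.csvfile</name>
  <value>s3a://example-public-bucket/large-dataset.csv.gz</value>
</property>
{code}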





> hadoop-aws test bucket woul be deleted on Jul 1, 2021
> -
>
> Key: HADOOP-17784
> URL: https://issues.apache.org/jira/browse/HADOOP-17784
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3, test
>Reporter: Leona Yoda
>Priority: Major
>
> I found an announcement that the landsat-pds bucket will be deleted on July 
> 1, 2021 (https://registry.opendata.aws/landsat-8/),
> and I think this bucket is used in the tests of the hadoop-aws module; see
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java#L93]
>  
> At this time I can still access the bucket, but we might have to change the 
> test bucket someday.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17784) hadoop-aws landsat-pds test bucket will be deleted on Jul 1, 2021

2021-07-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17784:

Summary: hadoop-aws landsat-pds test bucket will be deleted on Jul 1, 2021  
(was: hadoop-aws test bucket woul be deleted on Jul 1, 2021)

> hadoop-aws landsat-pds test bucket will be deleted on Jul 1, 2021
> -
>
> Key: HADOOP-17784
> URL: https://issues.apache.org/jira/browse/HADOOP-17784
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3, test
>Reporter: Leona Yoda
>Priority: Major
>
> I found an announcement that the landsat-pds bucket will be deleted on July 
> 1, 2021 (https://registry.opendata.aws/landsat-8/),
> and I think this bucket is used in the tests of the hadoop-aws module; see
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java#L93]
>  
> At this time I can still access the bucket, but we might have to change the 
> test bucket someday.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17784) hadoop-aws test bucket woul be deleted on Jul 1, 2021

2021-07-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17784:

Component/s: (was: tools)
 test
 fs/s3

> hadoop-aws test bucket woul be deleted on Jul 1, 2021
> -
>
> Key: HADOOP-17784
> URL: https://issues.apache.org/jira/browse/HADOOP-17784
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3, test
>Reporter: Leona Yoda
>Priority: Major
>
> I found an announcement that the landsat-pds bucket will be deleted on July 
> 1, 2021 (https://registry.opendata.aws/landsat-8/),
> and I think this bucket is used in the tests of the hadoop-aws module; see
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java#L93]
>  
> At this time I can still access the bucket, but we might have to change the 
> test bucket someday.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17784) hadoop-aws test bucket woul be deleted on Jul 1, 2021

2021-07-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17784:

Priority: Major  (was: Minor)

> hadoop-aws test bucket woul be deleted on Jul 1, 2021
> -
>
> Key: HADOOP-17784
> URL: https://issues.apache.org/jira/browse/HADOOP-17784
> Project: Hadoop Common
>  Issue Type: Test
>  Components: tools
>Reporter: Leona Yoda
>Priority: Major
>
> I found an announcement that the landsat-pds bucket will be deleted on July 
> 1, 2021 (https://registry.opendata.aws/landsat-8/),
> and I think this bucket is used in the tests of the hadoop-aws module; see
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java#L93]
>  
> At this time I can still access the bucket, but we might have to change the 
> test bucket someday.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17786) Parallelize stages in Jenkins

2021-07-01 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HADOOP-17786:
---

 Summary: Parallelize stages in Jenkins
 Key: HADOOP-17786
 URL: https://issues.apache.org/jira/browse/HADOOP-17786
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


Jenkins now builds for multiple environments as different stages. Need to 
parallelize them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17785) mvn test failed about hadoop@3.2.1

2021-07-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17785:

Component/s: build

> mvn test failed about hadoop@3.2.1
> --
>
> Key: HADOOP-17785
> URL: https://issues.apache.org/jira/browse/HADOOP-17785
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.1
>Reporter: shixijun
>Priority: Major
>
> {panel:title=mvn test failed about hadoop@3.2.1}
> mvn test failed
> {panel}
> [root@localhost spack-src]# mvn -version
> Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
> Maven home: 
> /home/all_spack_env/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/maven-3.6.3-fpgpwvz7es5yiaz2tez2pnlilrcatuvg
> Java version: 1.8.0_191, vendor: AdoptOpenJdk, runtime: 
> /home/all_spack_env/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/openjdk-1.8.0_191-b12-fidptihybskgklbjoo4lagkacm6n6lod/jre
> Default locale: en_US, platform encoding: ANSI_X3.4-1968
> OS name: "linux", version: "4.18.0-80.el8.aarch64", arch: "aarch64", family: 
> "unix"
> [root@localhost spack-src]# java -version
> openjdk version "1.8.0_191"
> OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_191-b12)
> OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.191-b12, mixed mode)
> [root@localhost spack-src]# mvn test
> ……
> [INFO] Running org.apache.hadoop.tools.TestCommandShell
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.111 
> s - in org.apache.hadoop.tools.TestCommandShell
> [INFO]
> [INFO] Results:
> [INFO]
> [ERROR] Failures:
> [ERROR]   
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir:643
>  Should throw IOException
> [ERROR]   
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:288
>  Should throw IOException
> [ERROR]   
> TestFileUtil.testFailFullyDelete:446->validateAndSetWritablePermissions:422 
> The directory xSubDir *should* not have been deleted. expected: but 
> was:
> [ERROR]   
> TestFileUtil.testFailFullyDeleteContents:525->validateAndSetWritablePermissions:422
>  The directory xSubDir *should* not have been deleted. expected: but 
> was:
> [ERROR]   TestFileUtil.testGetDU:571
> [ERROR]   TestFsShellCopy.testPutSrcDirNoPerm:627->shellRun:80 expected:<1> 
> but was:<0>
> [ERROR]   TestFsShellCopy.testPutSrcFileNoPerm:652->shellRun:80 expected:<1> 
> but was:<0>
> [ERROR]   TestLocalDirAllocator.test0:140->validateTempDirCreation:109 
> Checking for build/test/temp/RELATIVE1 in 
> build/test/temp/RELATIVE0/block995011826146306285.tmp - FAILED!
> [ERROR]   TestLocalDirAllocator.test0:140->validateTempDirCreation:109 
> Checking for 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block792666236482175348.tmp
>  - FAILED!
> [ERROR]   TestLocalDirAllocator.test0:141->validateTempDirCreation:109 
> Checking for 
> file:/home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block5124616846677903649.tmp
>  - FAILED!
> [ERROR]   
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:162->validateTempDirCreation:109
>  Checking for build/test/temp/RELATIVE2 in 
> build/test/temp/RELATIVE1/block1176062344115776027.tmp - FAILED!
> [ERROR]   
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:109
>  Checking for 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block3514694215643608527.tmp
>  - FAILED!
> [ERROR]   
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:109
>  Checking for 
> file:/home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block883026101475466701.tmp
>  - FAILED!

[jira] [Updated] (HADOOP-17785) mvn test failed about hadoop@3.2.1

2021-07-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17785:

Priority: Minor  (was: Major)

> mvn test failed about hadoop@3.2.1
> --
>
> Key: HADOOP-17785
> URL: https://issues.apache.org/jira/browse/HADOOP-17785
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.1
>Reporter: shixijun
>Priority: Minor
>
> {panel:title=mvn test failed about hadoop@3.2.1}
> mvn test failed
> {panel}
> [root@localhost spack-src]# mvn -version
> Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
> Maven home: 
> /home/all_spack_env/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/maven-3.6.3-fpgpwvz7es5yiaz2tez2pnlilrcatuvg
> Java version: 1.8.0_191, vendor: AdoptOpenJdk, runtime: 
> /home/all_spack_env/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/openjdk-1.8.0_191-b12-fidptihybskgklbjoo4lagkacm6n6lod/jre
> Default locale: en_US, platform encoding: ANSI_X3.4-1968
> OS name: "linux", version: "4.18.0-80.el8.aarch64", arch: "aarch64", family: 
> "unix"
> [root@localhost spack-src]# java -version
> openjdk version "1.8.0_191"
> OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_191-b12)
> OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.191-b12, mixed mode)
> [root@localhost spack-src]# mvn test
> ……
> [INFO] Running org.apache.hadoop.tools.TestCommandShell
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.111 
> s - in org.apache.hadoop.tools.TestCommandShell
> [INFO]
> [INFO] Results:
> [INFO]
> [ERROR] Failures:
> [ERROR]   
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir:643
>  Should throw IOException
> [ERROR]   
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:288
>  Should throw IOException
> [ERROR]   
> TestFileUtil.testFailFullyDelete:446->validateAndSetWritablePermissions:422 
> The directory xSubDir *should* not have been deleted. expected: but 
> was:
> [ERROR]   
> TestFileUtil.testFailFullyDeleteContents:525->validateAndSetWritablePermissions:422
>  The directory xSubDir *should* not have been deleted. expected: but 
> was:
> [ERROR]   TestFileUtil.testGetDU:571
> [ERROR]   TestFsShellCopy.testPutSrcDirNoPerm:627->shellRun:80 expected:<1> 
> but was:<0>
> [ERROR]   TestFsShellCopy.testPutSrcFileNoPerm:652->shellRun:80 expected:<1> 
> but was:<0>
> [ERROR]   TestLocalDirAllocator.test0:140->validateTempDirCreation:109 
> Checking for build/test/temp/RELATIVE1 in 
> build/test/temp/RELATIVE0/block995011826146306285.tmp - FAILED!
> [ERROR]   TestLocalDirAllocator.test0:140->validateTempDirCreation:109 
> Checking for 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block792666236482175348.tmp
>  - FAILED!
> [ERROR]   TestLocalDirAllocator.test0:141->validateTempDirCreation:109 
> Checking for 
> file:/home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block5124616846677903649.tmp
>  - FAILED!
> [ERROR]   
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:162->validateTempDirCreation:109
>  Checking for build/test/temp/RELATIVE2 in 
> build/test/temp/RELATIVE1/block1176062344115776027.tmp - FAILED!
> [ERROR]   
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:109
>  Checking for 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block3514694215643608527.tmp
>  - FAILED!
> [ERROR]   
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:109
>  Checking for 
> file:/home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block883026101475466701.tmp
>  - FAILED!

[jira] [Moved] (HADOOP-17785) mvn test failed about hadoop@3.2.1

2021-07-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HDFS-16103 to HADOOP-17785:


  Key: HADOOP-17785  (was: HDFS-16103)
Affects Version/s: (was: 3.2.1)
   3.2.1
  Project: Hadoop Common  (was: Hadoop HDFS)

> mvn test failed about hadoop@3.2.1
> --
>
> Key: HADOOP-17785
> URL: https://issues.apache.org/jira/browse/HADOOP-17785
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.1
>Reporter: shixijun
>Priority: Major
>
> {panel:title=mvn test failed about hadoop@3.2.1}
> mvn test failed
> {panel}
> [root@localhost spack-src]# mvn -version
> Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
> Maven home: 
> /home/all_spack_env/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/maven-3.6.3-fpgpwvz7es5yiaz2tez2pnlilrcatuvg
> Java version: 1.8.0_191, vendor: AdoptOpenJdk, runtime: 
> /home/all_spack_env/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/openjdk-1.8.0_191-b12-fidptihybskgklbjoo4lagkacm6n6lod/jre
> Default locale: en_US, platform encoding: ANSI_X3.4-1968
> OS name: "linux", version: "4.18.0-80.el8.aarch64", arch: "aarch64", family: 
> "unix"
> [root@localhost spack-src]# java -version
> openjdk version "1.8.0_191"
> OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_191-b12)
> OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.191-b12, mixed mode)
> [root@localhost spack-src]# mvn test
> ……
> [INFO] Running org.apache.hadoop.tools.TestCommandShell
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.111 
> s - in org.apache.hadoop.tools.TestCommandShell
> [INFO]
> [INFO] Results:
> [INFO]
> [ERROR] Failures:
> [ERROR]   
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testGlobStatusThrowsExceptionForUnreadableDir:643
>  Should throw IOException
> [ERROR]   
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:288
>  Should throw IOException
> [ERROR]   
> TestFileUtil.testFailFullyDelete:446->validateAndSetWritablePermissions:422 
> The directory xSubDir *should* not have been deleted. expected: but 
> was:
> [ERROR]   
> TestFileUtil.testFailFullyDeleteContents:525->validateAndSetWritablePermissions:422
>  The directory xSubDir *should* not have been deleted. expected: but 
> was:
> [ERROR]   TestFileUtil.testGetDU:571
> [ERROR]   TestFsShellCopy.testPutSrcDirNoPerm:627->shellRun:80 expected:<1> 
> but was:<0>
> [ERROR]   TestFsShellCopy.testPutSrcFileNoPerm:652->shellRun:80 expected:<1> 
> but was:<0>
> [ERROR]   TestLocalDirAllocator.test0:140->validateTempDirCreation:109 
> Checking for build/test/temp/RELATIVE1 in 
> build/test/temp/RELATIVE0/block995011826146306285.tmp - FAILED!
> [ERROR]   TestLocalDirAllocator.test0:140->validateTempDirCreation:109 
> Checking for 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block792666236482175348.tmp
>  - FAILED!
> [ERROR]   TestLocalDirAllocator.test0:141->validateTempDirCreation:109 
> Checking for 
> file:/home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block5124616846677903649.tmp
>  - FAILED!
> [ERROR]   
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:162->validateTempDirCreation:109
>  Checking for build/test/temp/RELATIVE2 in 
> build/test/temp/RELATIVE1/block1176062344115776027.tmp - FAILED!
> [ERROR]   
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:109
>  Checking for 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block3514694215643608527.tmp
>  - FAILED!
> [ERROR]   
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:109
>  Checking for 
> file:/home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
>  in 
> /home/all_spack_env/spack_stage/root/spack-stage-hadoop-3.2.1-xvpobktnlicqhfzwbkriy4cick5tpsab/spack-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block883026101475466701.tmp
>  - FAILED!

[jira] [Commented] (HADOOP-17779) Lock File System Creator Semaphore Uninterruptibly

2021-07-01 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372608#comment-17372608
 ] 

Steve Loughran commented on HADOOP-17779:
-

* As usual, please declare the component, version, etc.
* Do you have a stack trace of this going wrong?

If semaphore locks become interruptible, what does that mean, especially for 
process shutdown, if one thread is blocked waiting on some network response 
which will never arrive?

> Lock File System Creator Semaphore Uninterruptibly
> --
>
> Key: HADOOP-17779
> URL: https://issues.apache.org/jira/browse/HADOOP-17779
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{FileSystem}} "creator permits" are acquired in an interruptible way.  This 
> changed the behavior of the call, because previous callers were handling the 
> IOException as a critical error.  An interrupt is now handled in the typical 
> {{InterruptedException}} way.  Lastly, there was no documentation of this new 
> behavior, so again, callers are not prepared.
> Restore the previous behavior and lock the semaphore uninterruptibly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
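
For readers unfamiliar with the distinction discussed above, a minimal sketch
of the two acquisition styles on java.util.concurrent.Semaphore (illustrative
only; this is not the actual FileSystem creator-permits code):

{code:java}
import java.util.concurrent.Semaphore;

// Illustrative sketch of interruptible vs. uninterruptible acquisition.
public class PermitStyles {
  private static final Semaphore CREATOR_PERMITS = new Semaphore(64);

  static void interruptibleStyle() throws InterruptedException {
    // An interrupt while waiting surfaces as InterruptedException,
    // a new failure mode that older callers never had to handle.
    CREATOR_PERMITS.acquire();
    try {
      // ... create and cache the FileSystem instance ...
    } finally {
      CREATOR_PERMITS.release();
    }
  }

  static void uninterruptibleStyle() {
    // The thread keeps waiting across interrupts; per the JDK javadoc,
    // the interrupt status is re-set when this call returns.
    CREATOR_PERMITS.acquireUninterruptibly();
    try {
      // ... create and cache the FileSystem instance ...
    } finally {
      CREATOR_PERMITS.release();
    }
  }
}
{code}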



[jira] [Updated] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-07-01 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated HADOOP-15327:

Attachment: HADOOP-15327.005.patch

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, HADOOP-15327.005.patch, 
> HADOOP-15327.005.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log, 
> testfailure-testMapFileAccess-emptyresponse.zip, 
> testfailure-testReduceFromPartialMem.zip
>
>
> This way, we can remove the dependencies on the netty3 (jboss.netty)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17773) Avoid using zookeeper deprecated API and classes

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17773?focusedWorklogId=617485&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617485
 ]

ASF GitHub Bot logged work on HADOOP-17773:
---

Author: ASF GitHub Bot
Created on: 01/Jul/21 09:11
Start Date: 01/Jul/21 09:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3163:
URL: https://github.com/apache/hadoop/pull/3163#issuecomment-872071898


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 12s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 49s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 18s | 
[/patch-mvninstall-hadoop-common-project_hadoop-registry.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-registry.txt)
 |  hadoop-registry in the patch failed.  |
   | -1 :x: |  compile  |   1m 25s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   1m 25s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   1m 15s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   1m 15s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/results-checkstyle-hadoop-common-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/results-checkstyle-hadoop-common-project.txt)
 |  hadoop-common-project: The patch generated 2 new + 368 unchanged - 1 fixed 
= 370 total (was 369)  |
   | -1 :x: |  mvnsite  |   0m 53s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvnsite  |  

[GitHub] [hadoop] hadoop-yetus commented on pull request #3163: HADOOP-17773. Avoid using zookeeper deprecated API and classes.

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3163:
URL: https://github.com/apache/hadoop/pull/3163#issuecomment-872071898


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 12s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 49s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 18s | 
[/patch-mvninstall-hadoop-common-project_hadoop-registry.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-registry.txt)
 |  hadoop-registry in the patch failed.  |
   | -1 :x: |  compile  |   1m 25s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   1m 25s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   1m 15s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   1m 15s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/results-checkstyle-hadoop-common-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/results-checkstyle-hadoop-common-project.txt)
 |  hadoop-common-project: The patch generated 2 new + 368 unchanged - 1 fixed 
= 370 total (was 369)  |
   | -1 :x: |  mvnsite  |   0m 53s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 20s | 
[/patch-mvnsite-hadoop-common-project_hadoop-registry.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3163/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-registry.txt)
 |  hadoop-registry in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |

[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=617477&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617477
 ]

ASF GitHub Bot logged work on HADOOP-17139:
---

Author: ASF GitHub Bot
Created on: 01/Jul/21 08:48
Start Date: 01/Jul/21 08:48
Worklog Time Spent: 10m 
  Work Description: bogthe commented on a change in pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#discussion_r662103006



##########
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractCopyFromLocalTest.java
##########
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.contract;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathExistsException;
+import org.junit.Test;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+public abstract class AbstractContractCopyFromLocalTest extends
+    AbstractFSContractTestBase {
+
+  private static final Charset ASCII = StandardCharsets.US_ASCII;
+  private File file;
+
+  @Override
+  public void teardown() throws Exception {
+    super.teardown();
+    if (file != null) {
+      file.delete();
+    }
+  }
+
+  @Test
+  public void testCopyEmptyFile() throws Throwable {
+    file = File.createTempFile("test", ".txt");
+    Path dest = copyFromLocal(file, true);
+    assertPathExists("uploaded file", dest);
+  }
+
+  @Test
+  public void testCopyFile() throws Throwable {
+    String message = "hello";
+    file = createTempFile(message);
+    Path dest = copyFromLocal(file, true);
+
+    assertPathExists("uploaded file not found", dest);
+    // TODO: Should this be assertFileExists?
+    assertTrue("source file deleted", Files.exists(file.toPath()));
+
+    FileSystem fs = getFileSystem();
+    FileStatus status = fs.getFileStatus(dest);
+    assertEquals("File length of " + status,
+        message.getBytes(ASCII).length, status.getLen());
+    assertFileTextEquals(dest, message);
+  }
+
+  @Test
+  public void testCopyFileNoOverwrite() throws Throwable {
+    file = createTempFile("hello");
+    copyFromLocal(file, true);
+    intercept(PathExistsException.class,
+        () -> copyFromLocal(file, false));
+  }
+
+  @Test
+  public void testCopyFileOverwrite() throws Throwable {
+    file = createTempFile("hello");
+    Path dest = copyFromLocal(file, true);
+    String updated = "updated";
+    FileUtils.write(file, updated, ASCII);
+    copyFromLocal(file, true);
+    assertFileTextEquals(dest, updated);
+  }
+
+  @Test
+  public void testCopyMissingFile() throws Throwable {
+    file = createTempFile("test");
+    file.delete();
+    // first upload to create
+    intercept(FileNotFoundException.class, "",
+        () -> copyFromLocal(file, true));
+  }
+
+  @Test
+  public void testSourceIsFileAndDelSrcTrue() throws Throwable {
+    describe("Source is a file, delSrc flag is set to true");
+
+    file = createTempFile("test");
+    copyFromLocal(file, false, true);
+
+    assertFalse("uploaded file", Files.exists(file.toPath()));
+  }
+
+  @Test
+  public void testSourceIsFileAndDestinationIsDirectory() throws Throwable {
+    describe("Source is a file and destination is a directory. File" +
+        " should be copied inside the directory.");
+
+    file = createTempFile("test");
+    Path source = new Path(file.toURI());
+    FileSystem fs = getFileSystem();
+
+    File dir = createTempDirectory("test");
+    Path destination = fileToPath

[GitHub] [hadoop] bogthe commented on a change in pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-01 Thread GitBox


bogthe commented on a change in pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#discussion_r662103006



##########
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractCopyFromLocalTest.java
##########
@@ -0,0 +1,285 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.contract;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathExistsException;
+import org.junit.Test;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+public abstract class AbstractContractCopyFromLocalTest extends
+    AbstractFSContractTestBase {
+
+  private static final Charset ASCII = StandardCharsets.US_ASCII;
+  private File file;
+
+  @Override
+  public void teardown() throws Exception {
+    super.teardown();
+    if (file != null) {
+      file.delete();
+    }
+  }
+
+  @Test
+  public void testCopyEmptyFile() throws Throwable {
+    file = File.createTempFile("test", ".txt");
+    Path dest = copyFromLocal(file, true);
+    assertPathExists("uploaded file", dest);
+  }
+
+  @Test
+  public void testCopyFile() throws Throwable {
+    String message = "hello";
+    file = createTempFile(message);
+    Path dest = copyFromLocal(file, true);
+
+    assertPathExists("uploaded file not found", dest);
+    // TODO: Should this be assertFileExists?
+    assertTrue("source file deleted", Files.exists(file.toPath()));
+
+    FileSystem fs = getFileSystem();
+    FileStatus status = fs.getFileStatus(dest);
+    assertEquals("File length of " + status,
+        message.getBytes(ASCII).length, status.getLen());
+    assertFileTextEquals(dest, message);
+  }
+
+  @Test
+  public void testCopyFileNoOverwrite() throws Throwable {
+    file = createTempFile("hello");
+    copyFromLocal(file, true);
+    intercept(PathExistsException.class,
+        () -> copyFromLocal(file, false));
+  }
+
+  @Test
+  public void testCopyFileOverwrite() throws Throwable {
+    file = createTempFile("hello");
+    Path dest = copyFromLocal(file, true);
+    String updated = "updated";
+    FileUtils.write(file, updated, ASCII);
+    copyFromLocal(file, true);
+    assertFileTextEquals(dest, updated);
+  }
+
+  @Test
+  public void testCopyMissingFile() throws Throwable {
+    file = createTempFile("test");
+    file.delete();
+    // first upload to create
+    intercept(FileNotFoundException.class, "",
+        () -> copyFromLocal(file, true));
+  }
+
+  @Test
+  public void testSourceIsFileAndDelSrcTrue() throws Throwable {
+    describe("Source is a file, delSrc flag is set to true");
+
+    file = createTempFile("test");
+    copyFromLocal(file, false, true);
+
+    assertFalse("uploaded file", Files.exists(file.toPath()));
+  }
+
+  @Test
+  public void testSourceIsFileAndDestinationIsDirectory() throws Throwable {
+    describe("Source is a file and destination is a directory. File" +
+        " should be copied inside the directory.");
+
+    file = createTempFile("test");
+    Path source = new Path(file.toURI());
+    FileSystem fs = getFileSystem();
+
+    File dir = createTempDirectory("test");
+    Path destination = fileToPath(dir);
+    fs.delete(destination, false);
+    mkdirs(destination);
+
+    fs.copyFromLocalFile(source, destination);
+    System.out.println("Did this work?");
+  }
+
+  @Test
+  public void testSrcIsDirWithFilesAndCopySuccessful() throws Throwable {
+    describe("Source is a directory with files, copy should copy all" +
+        " dir contents to source");
+    String firstChild = "childO

[GitHub] [hadoop] GauthamBanasandra opened a new pull request #3165: [Do not commit] Refactor creds in Jenkinsfile

2021-07-01 Thread GitBox


GauthamBanasandra opened a new pull request #3165:
URL: https://github.com/apache/hadoop/pull/3165


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3162: fix state in applicationhistory web page

2021-07-01 Thread GitBox


hadoop-yetus commented on pull request #3162:
URL: https://github.com/apache/hadoop/pull/3162#issuecomment-872036323


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 59s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  88m 24s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3162/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3162 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 1d272b71b5bf 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e3b24abad215909f386ea5c38574948895069b4e |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3162/1/testReport/ |
   | Max. process+thread count | 523 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3162/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop] containerAnalyzer commented on pull request #3164: Fix NPE in Find.java

2021-07-01 Thread GitBox


containerAnalyzer commented on pull request #3164:
URL: https://github.com/apache/hadoop/pull/3164#issuecomment-872034803


   Hello,
   There is another NPE in DancingLinks.java. The patch is also provided in 
the PR. Here is the bug trace:
   
   1. Return null to caller 
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java#L393
   
   2. Function advance executes and returns
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java#L419
   
   3. Function add executes and choices contains a null value
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java#L419
   
   4. Function get executes and returns
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java#L423
   
   5. The return value of function get is used as the 1st parameter in function 
rollback (the return value of function get can be null)
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java#L423
   
   6. The value of row.left is read, which will lead to a null pointer dereference
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java#L401
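   
   A minimal, self-contained sketch of the null guard this trace calls for is 
below. It is a reduced model, not the DancingLinks source: advance and choices 
only echo the names in the report.
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   
   // Reduced model of the reported path: advance() may return null (step 1),
   // so a null row must never be stored in choices or dereferenced later.
   public class DancingLinksNpeSketch {
     // Stand-in for DancingLinks.advance(): null when no viable row remains.
     static String advance(int level) {
       return level < 3 ? "row-" + level : null;
     }
   
     public static void main(String[] args) {
       List<String> choices = new ArrayList<>();
       for (int level = 0; level < 5; level++) {
         String row = advance(level);
         if (row == null) {
           break; // guard: never store a null row for a later rollback to read
         }
         choices.add(row);
       }
       // choices.get(i) can now never hand a null row to a rollback-style helper.
       System.out.println(choices);
     }
   }
   ```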
   
   
   Commit: 986d0a4f1d5543fa0b4f5916729728f78b4acec9
   
   
   ContainerAnalyzer


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] containerAnalyzer opened a new pull request #3164: Fix NPE in Find.java

2021-07-01 Thread GitBox


containerAnalyzer opened a new pull request #3164:
URL: https://github.com/apache/hadoop/pull/3164


   Hello,
   Our static analyzer found the following potential NPE. We have checked the 
feasibility of this execution trace; guarding against it would improve the 
code quality. We have provided a patch in this PR. Please check and confirm 
it.
   
   Here is the bug trace.
   
   1. The false branch is selected at this point (expressionClass == null is 
true), and null is assigned to instance
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/ExpressionFactory.java#L129-L133
   
   2. instance, which can be null, is returned to the caller
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/ExpressionFactory.java#L133
   
   3. Function createExpression executes and stores the return value in expr 
(expr can be null)
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/Find.java#L113
   
   4. Function add executes and primaries contains a null value
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/Find.java#L117
   
   5. Function next executes and stores the return value in expr (expr can be 
null)
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/Find.java#L139
   
   6. expr is passed as the this pointer to function getUsage (expr can be 
null), which will lead to a null pointer dereference
   
https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/Find.java#L140
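   
   As a rough illustration of the guard implied by step 6 (a stand-alone 
reduction; Expression and createExpression here are hypothetical stand-ins 
for the Hadoop classes of the same name, not the real API):
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   // Reduced model: the factory can return null for an unknown name (steps 1-2),
   // so the caller must check before invoking getUsage() (step 6).
   public class FindNpeSketch {
     interface Expression { String getUsage(); }
   
     static final Map<String, Expression> REGISTRY = new HashMap<>();
     static { REGISTRY.put("-name", () -> "-name <pattern>"); }
   
     // Stand-in for ExpressionFactory.createExpression(): null when unregistered.
     static Expression createExpression(String name) {
       return REGISTRY.get(name);
     }
   
     public static void main(String[] args) {
       for (String name : new String[] {"-name", "-bogus"}) {
         Expression expr = createExpression(name);
         if (expr == null) {
           // Guard: report the unknown primary instead of dereferencing null.
           System.err.println("Unknown expression: " + name);
           continue;
         }
         System.out.println(expr.getUsage());
       }
     }
   }
   ```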
   
   
   Commit: 986d0a4f1d5543fa0b4f5916729728f78b4acec9
   
   
   
   
   ContainerAnalyzer


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17773) Avoid using zookeeper deprecated API and classes

2021-07-01 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372463#comment-17372463
 ] 

Surendra Singh Lilhore commented on HADOOP-17773:
-

Raised PR. This fix helps to compile zookeeper with zookeeper-3.6.2.
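
For reference, a minimal sketch of the direction such a change can take: start 
an in-process ZooKeeper for tests through the public ServerCnxnFactory API 
instead of the removed ServerCnxnFactoryAccessor. This assumes the ZooKeeper 
3.5+ public API; the data directory and connection limit are placeholder test 
values, and the actual PR may wire things differently.

{code:java}
import java.io.File;
import java.net.InetSocketAddress;

import org.apache.zookeeper.server.ServerCnxnFactory;
import org.apache.zookeeper.server.ZooKeeperServer;

// Start an embedded ZooKeeper using only supported, public classes.
public class EmbeddedZkSketch {
  public static void main(String[] args) throws Exception {
    File dir = new File("target/zk-test"); // placeholder data dir
    ZooKeeperServer zks = new ZooKeeperServer(dir, dir, 2000); // snapDir, logDir, tickTime
    ServerCnxnFactory factory =
        ServerCnxnFactory.createFactory(new InetSocketAddress(0), 60);
    factory.startup(zks); // binds an ephemeral port
    System.out.println("ZooKeeper listening on port " + factory.getLocalPort());
    factory.shutdown();
  }
}
{code}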

> Avoid using zookeeper deprecated API and classes
> 
>
> Key: HADOOP-17773
> URL: https://issues.apache.org/jira/browse/HADOOP-17773
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the latest version of ZooKeeper, some internal classes that are used in 
> Hadoop test code have been removed, for example ServerCnxnFactoryAccessor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17773) Avoid using zookeeper deprecated API and classes

2021-07-01 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372463#comment-17372463
 ] 

Surendra Singh Lilhore edited comment on HADOOP-17773 at 7/1/21, 7:18 AM:
--

Raised PR. This fix helps to compile Hadoop with zookeeper-3.6.2.


was (Author: surendrasingh):
Raised PR. This fix help to compile zookeeper with zookeeper-3.6.2.

> Avoid using zookeeper deprecated API and classes
> 
>
> Key: HADOOP-17773
> URL: https://issues.apache.org/jira/browse/HADOOP-17773
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the latest version of ZooKeeper, some internal classes that are used in 
> Hadoop test code have been removed, for example ServerCnxnFactoryAccessor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17773) Avoid using zookeeper deprecated API and classes

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17773:

Labels: pull-request-available  (was: )

> Avoid using zookeeper deprecated API and classes
> 
>
> Key: HADOOP-17773
> URL: https://issues.apache.org/jira/browse/HADOOP-17773
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the latest version of ZooKeeper, some internal classes that are used in 
> Hadoop test code have been removed, for example ServerCnxnFactoryAccessor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17773) Avoid using zookeeper deprecated API and classes

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17773?focusedWorklogId=617432&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617432
 ]

ASF GitHub Bot logged work on HADOOP-17773:
---

Author: ASF GitHub Bot
Created on: 01/Jul/21 07:13
Start Date: 01/Jul/21 07:13
Worklog Time Spent: 10m 
  Work Description: surendralilhore opened a new pull request #3163:
URL: https://github.com/apache/hadoop/pull/3163


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 617432)
Remaining Estimate: 0h
Time Spent: 10m

> Avoid using zookeeper deprecated API and classes
> 
>
> Key: HADOOP-17773
> URL: https://issues.apache.org/jira/browse/HADOOP-17773
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the latest version of ZooKeeper, some internal classes that are used in 
> Hadoop test code have been removed, for example ServerCnxnFactoryAccessor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] surendralilhore opened a new pull request #3163: HADOOP-17773. Avoid using zookeeper deprecated API and classes.

2021-07-01 Thread GitBox


surendralilhore opened a new pull request #3163:
URL: https://github.com/apache/hadoop/pull/3163


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org