GitHub OAuth token

2020-07-27 Thread Akira Ajisaka
Hi Steve,

Would you register the GitHub hadoop-yetus account's OAuth token on the new
Jenkins servers?
https://ci-hadoop.apache.org/

I'd like to migrate the hadoop-multibranch job, and the token is needed for that.

Thanks and regards,
Akira


[jira] [Resolved] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread angerszhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angerszhu resolved HADOOP-17155.

Resolution: Not A Problem

> DF implementation of CachingGetSpaceUsed makes DFS Used size not correct
> 
>
> Key: HADOOP-17155
> URL: https://issues.apache.org/jira/browse/HADOOP-17155
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: angerszhu
>Priority: Minor
> Attachments: HADOOP-17155.1.patch
>
>
> When we calculate a DataNode's storage used, we add each volume's used size 
> together, and each volume's size comes from its block pools' (BPs') sizes. 
> When we use DF instead of DU, DF checks disk space usage rather than the 
> size of a directory, so when it checks a BP directory path, what it actually 
> measures is the space used on the corresponding disk. 
>  
> When we use this with federation, each volume may have more than one BP, and 
> each BP returns the usage of the same underlying disk. 
>  
> If we have two BPs under one volume, the DataNode's storage info reports a 
> Used size that is double the real size.
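To make the double counting concrete, here is a minimal arithmetic sketch. The class and the numbers are invented for illustration and are not taken from the Hadoop code base; the only assumption is that, with a DF-based GetSpaceUsed, each BP reports the usage of its whole partition and the volume sums the per-BP values as described above.

{code:java}
/** Illustrative only: summing per-BP DF results double counts a shared disk. */
public class DfDoubleCountExample {
  public static void main(String[] args) {
    long diskUsedBytes = 800L << 30;       // DF result for the whole partition: 800 GiB used
    long bp1Reported = diskUsedBytes;      // BP-1 directory lives on that partition
    long bp2Reported = diskUsedBytes;      // BP-2 (federation) lives on the same partition
    long volumeUsed = bp1Reported + bp2Reported;  // reported used: 1600 GiB, twice the truth
    System.out.println("real used = " + diskUsedBytes + ", reported = " + volumeUsed);
  }
}
{code}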



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Progress

2020-07-27 Thread Brahma Reddy Battula
It looks like the Yetus problem still isn't solved, even after Docker is
running and has the right permissions.

For now, I copied the old script from the build (
https://ci-hadoop.apache.org/job/hadoop-qbt-linux-ARM-trunk/8/console).

It's currently running:
https://ci-hadoop.apache.org/job/hadoop-qbt-linux-ARM-trunk/8/console

On Sun, Jul 26, 2020 at 10:05 PM Akira Ajisaka  wrote:

> Hi Gavin,
>
> Thank you for moving the arm nodes. Would you install and start the Docker
> daemon on arm nodes?
>
> ```
> Got permission denied while trying to connect to the Docker daemon socket
> at unix:///var/run/docker.sock: Get
> http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version:
> dial unix /var/run/docker.sock: connect: permission denied
>
> ```
>
> https://ci-hadoop.apache.org/job/hadoop-qbt-linux-ARM-trunk/3/console
>
> Thanks,
> Akira
>
> On Thu, Jul 23, 2020 at 11:47 AM Zhenyu Zheng 
> wrote:
>
> > Thanks for doing this. I was going to post about this in the JIRA
> yesterday
> > but got pulled away by something else.
> > Please help us migrate the ARM nodes.
> >
> > BR,
> >
> > On Thu, Jul 23, 2020 at 1:40 AM Gavin McDonald 
> > wrote:
> >
> > > Hi,
> > >
> > > On Wed, Jul 22, 2020 at 7:00 PM Akira Ajisaka 
> > wrote:
> > >
> > > > +CC: common-dev@hadoop
> > > >
> > > > Hi Gavin,
> > > >
> > > > Thank you for the reminder.
> > > > I have one question: when will the ARM servers be moved to ci-hadoop?
> > Hadoop
> > > > and HBase have daily jobs that run on the ARM servers.
> > > >
> > > > I'll migrate the other Hadoop Jenkins jobs to ci-hadoop this weekend.
> > > >
> > >
> > > Thanks.
> > >
> > > I'll move 2 of the ARM nodes tomorrow
> > >
> > > HTH
> > >
> > >
> > > >
> > > > Regards,
> > > > Akira
> > > >
> > > > On Wed, Jul 22, 2020 at 8:39 PM Gavin McDonald  >
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > It seems there is still not much happening in the way of migrating to
> > > > > ci-hadoop.
> > > > >
> > > > > Do you need any more information from me, or any help?
> > > > >
> > > > > The cut off date is 15th August and builds.a.o will be turned off
> on
> > > that
> > > > > date.
> > > > >
> > > > > --
> > > > >
> > > > > *Gavin McDonald*
> > > > > Systems Administrator
> > > > > ASF Infrastructure Team
> > > > >
> > > >
> > >
> > >
> > > --
> > >
> > > *Gavin McDonald*
> > > Systems Administrator
> > > ASF Infrastructure Team
> > >
> >
>


-- 
Brahma Reddy Battula


[jira] [Created] (HADOOP-17156) Clear readahead requests on stream close

2020-07-27 Thread Rajesh Balamohan (Jira)
Rajesh Balamohan created HADOOP-17156:
-

 Summary: Clear readahead requests on stream close
 Key: HADOOP-17156
 URL: https://issues.apache.org/jira/browse/HADOOP-17156
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Rajesh Balamohan


It would be good to cancel/clear pending read-ahead requests on stream close().
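As a rough sketch of the idea only: the class and queue below are invented for illustration and are not the ABFS read-ahead implementation. The point is that close() drops read-aheads that have not started yet, so they are never issued against a stream the caller has already closed.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Illustrative only: not an ABFS class. */
class ReadAheadingStream extends InputStream {
  private final Queue<Runnable> pendingReadAheads = new ConcurrentLinkedQueue<>();

  @Override
  public int read() throws IOException {
    return -1; // reading elided; the example is about close() behaviour
  }

  @Override
  public void close() throws IOException {
    // Drop queued read-aheads that have not started yet, instead of letting
    // them run (and allocate buffers) after the stream is closed.
    pendingReadAheads.clear();
    super.close();
  }
}
{code}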



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13221) s3a create() doesn't check for an ancestor path being a file

2020-07-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13221.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

I'm going to WONTFIX this: it's too expensive, and nobody has really noticed that we 
break one of the fundamental assumptions about storage.

> s3a create() doesn't check for an ancestor path being a file
> 
>
> Key: HADOOP-13221
> URL: https://issues.apache.org/jira/browse/HADOOP-13221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-13321-test.patch
>
>
> Seen in a code review. Notably, if true, this got past all of the FS contract 
> tests, showing we missed a couple.
> {{S3AFilesystem.create()}} does not examine its ancestor paths to verify that 
> none of them is a file. It looks for the destination path 
> if overwrite=false (see HADOOP-13188 for issues there), but it doesn't check 
> that the parent is not a file, nor the parent of that path.
> It must go up the tree, verifying that each path either does not exist or 
> is a directory. The scan can stop at the first entry that is a 
> directory, so the operation is O(empty-directories) and not O(directories).
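As a rough illustration of the walk described above, and not the actual S3AFileSystem code, such a check could look like the sketch below; the class and method names are invented for the example.

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative only: walk up from the destination, failing if an ancestor is a file. */
final class AncestorCheck {
  static void verifyNoFileAncestor(FileSystem fs, Path dest) throws IOException {
    Path p = dest.getParent();
    while (p != null && !p.isRoot()) {
      try {
        FileStatus st = fs.getFileStatus(p);
        if (st.isFile()) {
          throw new IOException("Ancestor " + p + " is a file; cannot create " + dest);
        }
        return;   // found a directory: stop here, everything above must also be a directory
      } catch (FileNotFoundException e) {
        p = p.getParent();   // missing entry: keep walking up the tree
      }
    }
  }
}
{code}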



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] New Apache Hadoop PMC member : Ayush Saxena

2020-07-27 Thread hemanth boyina
Congratulations Ayush !

On Mon, 27 Jul 2020, 10:22 Fengnan Li,  wrote:

> Well deserved! Congratulations!
>
> On 7/24/20, 8:15 AM, "Chao Sun"  wrote:
>
> Congratulations Ayush!
>
> On Fri, Jul 24, 2020 at 6:43 AM Rahul Gupta 
> wrote:
>
> > Congratulations
> >
> > On Thu, Jul 23, 2020 at 8:02 PM HarshaKiran Reddy Boreddy <
> > bharsh...@gmail.com> wrote:
> >
> > > Congratulations Ayush Saxena..!!!
> > >
> > > On Thu, Jul 23, 2020, 8:39 AM Xiaoqiao He 
> wrote:
> > >
> > > > Congratulations Ayush!
> > > >
> > > > Regards,
> > > > He Xiaoqiao
> > > >
> > > > On Thu, Jul 23, 2020 at 9:18 AM Sheng Liu <
> liusheng2...@gmail.com>
> > > wrote:
> > > >
> > > > > Congrats, and thanks for your help.
> > > > >
> > > > > > Sree Vaddi  wrote on Thu, Jul 23, 2020,
> at 3:59 AM:
> > > > >
> > > > > > Congratulations, Ayush. Keep up the good work.
> > > > > >
> > > > > >
> > > > > > Thank you.
> > > > > > /Sree
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Wednesday, July 22, 2020, 12:48:55 PM PDT, Team AMR <
> > > > > > teamamr.apa...@gmail.com> wrote:
> > > > > >
> > > > > >  Congrats
> > > > > >
> > > > > >
> > > > > > On Thu, Jul 23, 2020 at 1:10 AM Vinayakumar B <
> > > vinayakum...@apache.org
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I am very glad to announce that Ayush Saxena was voted to
> join
> > > Apache
> > > > > > > Hadoop PMC.
> > > > > > >
> > > > > > > Congratulations Ayush! Well deserved and thank you for your
> > > > dedication
> > > > > to
> > > > > > > the project. Please keep up the good work.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > -Vinay
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HADOOP-17157) S3A rename operation not the same with HDFS

2020-07-27 Thread Jiajia Li (Jira)
Jiajia Li created HADOOP-17157:
--

 Summary: S3A rename operation not the same with HDFS
 Key: HADOOP-17157
 URL: https://issues.apache.org/jira/browse/HADOOP-17157
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Jiajia Li


When I run the test ITestS3ADeleteManyFiles, I change the line at
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADeleteManyFiles.java#L97]

to

{code}

fs.mkdirs(finalDir);

{code}

So before the rename operation, "finalParent/final" has already been created.

But after the rename operation, all the files are moved from 
"srcParent/src" directly into "finalParent/final".

So this is not the same as the HDFS rename operation:

HDFS rename includes the calculation of the destination path. If the 
destination exists and is a directory, the final destination of the rename 
becomes the destination plus the filename of the source path:

    let dest = if (isDir(FS, d) and d != src) :
            d + [filename(src)]
        else :
            d
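For illustration only, and not the actual HDFS or S3A implementation, the destination calculation above can be rendered in Java roughly as follows; the class and method names are invented for the example.

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative only: mirrors the spec's destination calculation for rename(src, d). */
final class RenameDest {
  static Path effectiveDest(FileSystem fs, Path src, Path d) throws IOException {
    try {
      if (fs.getFileStatus(d).isDirectory() && !d.equals(src)) {
        return new Path(d, src.getName());   // dest exists and is a directory: rename *into* it
      }
    } catch (FileNotFoundException e) {
      // destination does not exist: rename *to* it as-is
    }
    return d;
  }
}
{code}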



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-07-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/7/

No changes


[Error replacing 'FILE' - Workspace is not accessible]

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/216/

[Jul 26, 2020 3:51:44 PM] (noreply) YARN-10367. Failed to get nodejs 10.21.0 
when building docker image (#2171)
[Jul 26, 2020 4:55:04 PM] (Akira Ajisaka) YARN-10362. Javadoc for 
TimelineReaderAuthenticationFilterInitializer is broken. Contributed by Xieming 
Li.


[Error replacing 'FILE' - Workspace is not accessible]

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Created] (HADOOP-17158) Intermittent test timeout for ITestAbfsInputStreamStatistics#testReadAheadCounters

2020-07-27 Thread Mehakmeet Singh (Jira)
Mehakmeet Singh created HADOOP-17158:


 Summary: Intermittent test timeout for 
ITestAbfsInputStreamStatistics#testReadAheadCounters
 Key: HADOOP-17158
 URL: https://issues.apache.org/jira/browse/HADOOP-17158
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Mehakmeet Singh
Assignee: Mehakmeet Singh


The intermittent test timeout for 
ITestAbfsInputStreamStatistics#testReadAheadCounters is happening due to race 
conditions in the readAhead threads.

Test error:


{code:java}
[ERROR] testReadAheadCounters(org.apache.hadoop.fs.azurebfs.ITestAbfsInputStreamStatistics)  Time elapsed: 30.723 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 3 milliseconds
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.fs.azurebfs.ITestAbfsInputStreamStatistics.testReadAheadCounters(ITestAbfsInputStreamStatistics.java:346)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
{code}
Possible reasoning:

- The readAhead queue doesn't get drained, so the counter values are not 
reached within the 30-second window on some systems.

- The condition that the readAheadBytesRead and remoteBytesRead counters must 
be at least 4 KB and 32 KB respectively is not met on some machines: sometimes, 
instead of being served from the readAhead buffer, reads go remote because the 
readAhead threads are still queued to fill that buffer. As a result, one of the 
two counters never satisfies its condition, the test loops waiting for it, and 
eventually times out.

Possible fixes:

- Write a better test, one that passes under all conditions (see the sketch 
below for one direction).
- Maybe a unit test (UT) instead of an integration test (IT)?

Improving the test is preferable, with a UT as the last resort.
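A rough sketch of the first option, under the assumption that every byte the test reads is served either from the read-ahead buffer or remotely: assert on the combined total, which does not depend on thread scheduling, rather than on per-counter thresholds. The helper class below is invented for illustration and is not the ABFS statistics API.

{code:java}
import static org.junit.Assert.assertEquals;

/** Illustrative helper, not part of hadoop-azure: asserts on a timing-independent quantity. */
final class ReadAheadAssertions {
  static void assertAllBytesAccounted(long expectedBytesRead,
      long readAheadBytesRead, long remoteBytesRead) {
    // Whether a given read is served from the read-ahead buffer or remotely depends on
    // thread scheduling, but every byte comes from exactly one of the two paths, so the
    // sum is deterministic and safe to assert on without polling or sleeping.
    assertEquals("read-ahead + remote bytes should cover the whole read",
        expectedBytesRead, readAheadBytesRead + remoteBytesRead);
  }
}
{code}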



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/aarch64

2020-07-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-linux-ARM-trunk/8/

[Jul 27, 2020 6:53:21 AM] (pjoseph) YARN-10366. Fix Yarn rmadmin help message 
shows two labels for one node for --replaceLabelsOnNode.




-1 overall


The following subsystems voted -1:
docker


Powered by Apache Yetus   https://yetus.apache.org

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Re: [VOTE] Release Apache Hadoop 3.1.4 (RC4)

2020-07-27 Thread Steve Loughran
 +1

Did a cloudstore clean build and test, and did as well as I could with a
Spark build.

For anyone having Maven problems building Hadoop on a Mac: Homebrew now
forces its version of Maven to use a Homebrew-specific OpenJDK 11
(one that /usr/libexec/java_home doesn't locate); bits of the Hadoop build don't
work if Maven is running on Java 11. Removing the Homebrew Maven fixes
that, but now the bits of the Spark Maven build which call out to the SBT
build tool are running out of memory. This is one of those times when I think
"I need a Linux box".

Anyway: Maven builds are happy, and Spark compiles with the branch once you
change its Guava version. Well, that's progress.

-Steve


Re: [VOTE] Release Apache Hadoop 3.1.4 (RC4)

2020-07-27 Thread Dinesh Chitlangia
+1

- Built from source
- Verified checksum and signatures
- Deployed 3 node cluster
- Able to submit and complete an example mapreduce job

Thanks Gabor for organizing the release.

Regards,
Dinesh

On Tue, Jul 21, 2020 at 8:52 AM Gabor Bota  wrote:

> Hi folks,
>
> I have put together a release candidate (RC4) for Hadoop 3.1.4.
>
> *
> In addition to the previous RCs, this RC includes:
> * fix for HDFS-15313. Ensure inodes in active filesystem are not
> deleted during snapshot delete
> * fix for YARN-10347. Fix double locking in
> CapacityScheduler#reinitialize in branch-3.1
> (https://issues.apache.org/jira/browse/YARN-10347)
> * the revert of HDFS-14941, as it caused
> HDFS-15421. IBR leak causes standby NN to be stuck in safe mode.
> (https://issues.apache.org/jira/browse/HDFS-15421)
> * HDFS-15323, as requested.
> (https://issues.apache.org/jira/browse/HDFS-15323)
> *
>
> The RC is available at: http://people.apache.org/~gabota/hadoop-3.1.4-RC4/
> The RC tag in git is here:
> https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC4
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1275/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> and http://keys.gnupg.net/pks/lookup?op=get&search=0xB86249D83539B38C
>
> Please try the release and vote. The vote will run for 8 weekdays,
> until July 31, 2020, at 23:00 CET.
>
>
> Thanks,
> Gabor
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HADOOP-17159) Ability for forceful relogin in UserGroupInformation class

2020-07-27 Thread Sandeep Guggilam (Jira)
Sandeep Guggilam created HADOOP-17159:
-

 Summary: Ability for forceful relogin in UserGroupInformation class
 Key: HADOOP-17159
 URL: https://issues.apache.org/jira/browse/HADOOP-17159
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sandeep Guggilam


Currently we have a relogin() method in UGI which attempts a login only if no 
login has been attempted in the last 10 minutes (or a configured amount of time).

We should also provide a forceful relogin, irrespective of that time window, 
which clients can choose to use if needed. Consider the scenario below (a small 
sketch follows the list):
 # The SASL server is reimaged and new keytabs are fetched, refreshing the 
password.
 # A SASL client connection to the server then fails when it retries with the 
cached service ticket.
 # In such scenarios we should log out to clear the cached service tickets and 
then log back in. But since the current relogin() doesn't guarantee a login, 
this can fail.
 # A forceful relogin after the logout would help here.
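A minimal sketch of the intent, using only the existing public UserGroupInformation API; the proposed force-relogin method does not exist yet, so this only approximates the behaviour, and the helper class is invented for illustration.

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

/** Illustrative only: approximates the requested behaviour with today's public API. */
final class ForcedReloginSketch {
  static void reloginNow() throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();
    if (ugi.isFromKeytab()) {
      // reloginFromKeytab() is a no-op if the last login happened inside the configured
      // minimum window; the proposal is a variant that skips that check so the client
      // can force a fresh login (and fresh service tickets) right after a logout.
      ugi.reloginFromKeytab();
    }
  }
}
{code}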

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17160) ITestAbfsInputStreamStatistics#testReadAheadCounters timing out always

2020-07-27 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-17160:
-

 Summary: ITestAbfsInputStreamStatistics#testReadAheadCounters 
timing out always
 Key: HADOOP-17160
 URL: https://issues.apache.org/jira/browse/HADOOP-17160
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Bilahari T H


The test ITestAbfsInputStreamStatistics#testReadAheadCounters is always timing 
out.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17161) Make ipc.Client.stop() sleep configurable

2020-07-27 Thread Ramesh Kumar Thangarajan (Jira)
Ramesh Kumar Thangarajan created HADOOP-17161:
-

 Summary: Make ipc.Client.stop() sleep configurable
 Key: HADOOP-17161
 URL: https://issues.apache.org/jira/browse/HADOOP-17161
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ramesh Kumar Thangarajan


After identifying that HADOOP-16126 might cause issues in a few workloads, the 
ipc.Client.stop() sleep duration should be made configurable so that it can 
better suit multiple workloads.
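A minimal sketch of what "configurable" could look like; the configuration key and default value below are hypothetical and are not defined in Hadoop today.

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Illustrative only: the key name and default are hypothetical, not existing Hadoop keys. */
final class ClientStopSleepSketch {
  static final String IPC_CLIENT_STOP_SLEEP_MS_KEY = "ipc.client.stop.sleep.ms"; // hypothetical
  static final int IPC_CLIENT_STOP_SLEEP_MS_DEFAULT = 10;                        // hypothetical

  /** What ipc.Client.stop() could call instead of hard-coding its polling sleep. */
  static int getStopSleepMs(Configuration conf) {
    return conf.getInt(IPC_CLIENT_STOP_SLEEP_MS_KEY, IPC_CLIENT_STOP_SLEEP_MS_DEFAULT);
  }
}
{code}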



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org