[jira] [Created] (HADOOP-15771) Update the link to HowToContribute page

2018-09-18 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15771:
--

 Summary: Update the link to HowToContribute page
 Key: HADOOP-15771
 URL: https://issues.apache.org/jira/browse/HADOOP-15771
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, site
Reporter: Akira Ajisaka


The HowToContribute page has moved to 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute.

However, the new Apache Hadoop web site still links to the old page.
{noformat:title=config.toml}
[[menu.main]]
  name = "How to Contribute"
  url = "https://wiki.apache.org/hadoop/HowToContribute"
  parent = "development"
{noformat}
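
A likely fix (a sketch only, assuming the menu entry otherwise keeps its current shape) is to point the url at the new cwiki page:

```toml
[[menu.main]]
  name = "How to Contribute"
  url = "https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute"
  parent = "development"
```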



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15770) Merge HADOOP-15407 to trunk

2018-09-18 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-15770:
--

 Summary: Merge HADOOP-15407 to trunk
 Key: HADOOP-15770
 URL: https://issues.apache.org/jira/browse/HADOOP-15770
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sean Mackrory
Assignee: Sean Mackrory


I started a [VOTE] thread last night. I will be posting a patch of the current 
branch state to get pre-commit checks run.






Re: [VOTE] Release Apache Hadoop 2.8.5 (RC0)

2018-09-18 Thread 俊平堵
Hey Marton,
 The new release web site actually doesn't work for me. When I followed
your steps on the wiki, I hit the following issue while cloning the writable
hadoop-site repository:

git clone https://gitbox.apache.org/repos/asf/hadoop-site.git
Cloning into 'hadoop-site'...
remote: Counting objects: 252414, done.
remote: Compressing objects: 100% (29625/29625), done.
remote: Total 252414 (delta 219617), reused 252211 (delta 219422)
Receiving objects: 100% (252414/252414), 98.78 MiB | 3.32 MiB/s, done.
Resolving deltas: 100% (219617/219617), done.
warning: remote HEAD refers to nonexistent ref, unable to checkout.

Can you check that the above repository is the correct one to clone?
I can clone the read-only repository (https://github.com/apache/hadoop-site)
successfully, but cannot push changes back to it, which is expected.

Thanks,

Junping

Elek, Marton wrote on Mon, Sep 17, 2018 at 6:15 AM:

> Hi Junping,
>
> Thank you for working on this release.
>
> This release is the first release after the hadoop site change, and I
> would like to be sure that everything works fine.
>
> Unfortunately I didn't get permission to edit the old wiki, but the site
> update procedure is documented on the new wiki:
>
>
> https://cwiki.apache.org/confluence/display/HADOOP/How+to+generate+and+push+ASF+web+site+after+HADOOP-14163
>
> Please let me know if something is not working for you...
>
> Thanks,
> Marton
>
>
> On 09/10/2018 02:00 PM, 俊平堵 wrote:
> > Hi all,
> >
> >   I've created the first release candidate (RC0) for Apache
> > Hadoop 2.8.5. This is our next point release to follow up 2.8.4. It
> > includes 33 important fixes and improvements.
> >
> >
> >  The RC artifacts are available at:
> > http://home.apache.org/~junping_du/hadoop-2.8.5-RC0
> >
> >
> >  The RC tag in git is: release-2.8.5-RC0
> >
> >
> >
> >  The maven artifacts are available via repository.apache.org<
> > http://repository.apache.org> at:
> >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1140
> >
> >
> >  Please try the release and vote; the vote will run for the usual 5
> working
> > days, ending on 9/15/2018 PST time.
> >
> >
> > Thanks,
> >
> >
> > Junping
> >
>


[jira] [Reopened] (HADOOP-15702) ABFS: Increase timeout of ITestAbfsReadWriteAndSeek

2018-09-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-15702:
-

Actually, I have just seen it:
{code}
[ERROR] 
testReadAndWriteWithDifferentBufferSizesAndSeek[Size=104,857,600](org.apache.hadoop.fs.azurebfs.ITestAbfsReadWriteAndSeek)
  Time elapsed: 1,800.07 s  <<< ERROR!
java.lang.Exception: test timed out after 180 milliseconds
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429)
at java.util.concurrent.FutureTask.get(FutureTask.java:191)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.flushWrittenBytesToService(AbfsOutputStream.java:291)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.flushInternal(AbfsOutputStream.java:247)
at 
org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.close(AbfsOutputStream.java:230)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at 
org.apache.hadoop.fs.azurebfs.ITestAbfsReadWriteAndSeek.testReadWriteAndSeek(ITestAbfsReadWriteAndSeek.java:75)
at 
org.apache.hadoop.fs.azurebfs.ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek(ITestAbfsReadWriteAndSeek.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

> ABFS: Increase timeout of ITestAbfsReadWriteAndSeek
> ---
>
> Key: HADOOP-15702
> URL: https://issues.apache.org/jira/browse/HADOOP-15702
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: HADOOP-15407
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Fix For: HADOOP-15407
>
>
> ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek 
> fails for me all the time. Let's increase the timeout limit.
> It also seems to get executed twice...






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-09-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/

[Sep 17, 2018 4:37:45 AM] (sunilg) YARN-8715. Make allocation tags in the 
placement spec optional for
[Sep 17, 2018 2:15:24 PM] (wwei) YARN-8782. Fix exception message in
[Sep 17, 2018 4:21:54 PM] (nanda) HDDS-399. Persist open pipeline information 
across SCM restart.
[Sep 17, 2018 5:08:23 PM] (aengineer) HDFS-13919. Documentation: Improper 
formatting in Disk Balancer for
[Sep 17, 2018 5:46:28 PM] (aengineer) HDDS-435. Enhance the existing ozone 
documentation. Contributed by Elek,
[Sep 17, 2018 6:49:09 PM] (bharat) HDDS-463. Fix the release packaging of the 
ozone distribution.
[Sep 17, 2018 9:40:08 PM] (stevel) HADOOP-15754. s3guard: 
testDynamoTableTagging should clear existing
[Sep 17, 2018 9:41:17 PM] (aengineer) HDDS-475. Block Allocation returns same 
BlockID on different keys
[Sep 17, 2018 9:42:03 PM] (inigoiri) HDFS-13844. Fix the fmt_bytes function in 
the dfs-dust.js. Contributed
[Sep 17, 2018 11:21:10 PM] (arp) HDDS-487. Doc files are missing ASF license 
headers. Contributed by
[Sep 18, 2018 12:13:52 AM] (bharat) HDDS-352. Separate install and testing 
phases in acceptance tests.
[Sep 18, 2018 12:32:27 AM] (aajisaka) [JDK10] Upgrade Maven Javadoc Plugin from 
3.0.0-M1 to 3.0.1.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field:FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component):in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component): new java.io.FileWriter(File) At 
YarnServiceJobSubmitter.java:[line 195] 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component) may fail to clean up java.io.Writer on checked exception 
Obligation to clean up resource created at YarnServiceJobSubmitter.java:to 
clean up java.io.Writer on checked exception Obligation to clean up resource 
created at YarnServiceJobSubmitter.java:[line 195] is not discharged 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String,
 int, String) concatenates strings using + in a loop At 
YarnServiceUtils.java:using + in a loop At YarnServiceUtils.java:[line 72] 
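
The three FindBugs findings above (default-encoding FileWriter, a Writer not cleaned up on a checked exception, and string concatenation with + in a loop) are typically fixed as sketched below. This is illustrative code, not the actual YarnServiceJobSubmitter/YarnServiceUtils source:

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FindbugsFixSketch {

    // Explicit charset instead of new FileWriter(File), and try-with-resources
    // so the Writer is closed even when a checked exception is thrown.
    static void writeLaunchScript(Path script, String body) throws IOException {
        try (Writer w = Files.newBufferedWriter(script, StandardCharsets.UTF_8)) {
            w.write(body);
        }
    }

    // StringBuilder instead of String '+' inside a loop; the shape of the JSON
    // is made up for illustration.
    static String componentArrayJson(String name, int count) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < count; i++) {
            if (i > 0) {
                sb.append(',');
            }
            sb.append("{\"name\":\"").append(name).append(i).append("\"}");
        }
        return sb.append(']').toString();
    }
}
```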

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem 
   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.server.namenode.sps.TestBlockStorageMovementAttemptedItems 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/artifact/out/diff-compile-javac-root.txt
  [300K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/900/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   

[jira] [Created] (HADOOP-15769) ABFS: distcp tests are always skipped

2018-09-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15769:
---

 Summary: ABFS: distcp tests are always skipped
 Key: HADOOP-15769
 URL: https://issues.apache.org/jira/browse/HADOOP-15769
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure, test
Affects Versions: HADOOP-15407
Reporter: Steve Loughran
Assignee: Steve Loughran


The distcp contract tests for ABFS exist, but they aren't completely wired 
up, so they are always skipped.






[jira] [Resolved] (HADOOP-15702) ABFS: Increase timeout of ITestAbfsReadWriteAndSeek

2018-09-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15702.
-
   Resolution: Cannot Reproduce
Fix Version/s: HADOOP-15407

> ABFS: Increase timeout of ITestAbfsReadWriteAndSeek
> ---
>
> Key: HADOOP-15702
> URL: https://issues.apache.org/jira/browse/HADOOP-15702
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: HADOOP-15407
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Fix For: HADOOP-15407
>
>
> ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek 
> fails for me all the time. Let's increase the timeout limit.
> It also seems to get executed twice...






[jira] [Created] (HADOOP-15768) Do not use Time#now to calculate the rpc queue time duration

2018-09-18 Thread Yiqun Lin (JIRA)
Yiqun Lin created HADOOP-15768:
--

 Summary: Do not use Time#now to calculate the rpc queue time 
duration
 Key: HADOOP-15768
 URL: https://issues.apache.org/jira/browse/HADOOP-15768
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Yiqun Lin
Assignee: Ryan Wu


The RPC queue time calculation currently uses {{Time#now}}, which is not the 
recommended approach:
{code}
// Invoke the protocol method
long startTime = Time.now();
int qTime = (int) (startTime-receivedTime);

server.updateMetrics(detailedMetricsName, qTime, processingTime, false);
{code} 

We should use {{Time#monotonicNow()}} instead, since it is not affected by 
wall-clock adjustments. This JIRA will fix this across the RpcEngine 
implementation classes.
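
A minimal sketch of the difference (illustrative only; Hadoop's {{Time#monotonicNow()}} wraps {{System.nanoTime()}}, whereas {{Time#now}} is wall-clock time and can jump backwards on clock adjustments):

```java
public class QueueTimeSketch {

    // Hadoop-style monotonicNow(): milliseconds from a monotonic source,
    // safe for measuring durations.
    static long monotonicNow() {
        return System.nanoTime() / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        long receivedTime = monotonicNow();  // when the call was queued
        Thread.sleep(5);                     // stand-in for queueing delay
        long startTime = monotonicNow();     // when processing begins
        int qTime = (int) (startTime - receivedTime);
        // With a monotonic clock, qTime can never be negative.
        System.out.println("qTime=" + qTime + "ms");
    }
}
```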






Re: [VOTE] Merge HADOOP-15407 to trunk

2018-09-18 Thread Steve Loughran
+1 (Binding)

I've been testing this; the current branch is rebased onto trunk and all the 
new tests pass.


The connector is as good as any connector gets before people start playing 
with it: there are always surprises in the wild, usually around networking 
and configuration. For those, we will have to wait and see what happens.






> On 18 Sep 2018, at 04:10, Sean Mackrory  wrote:
> 
> All,
> 
> I would like to propose that HADOOP-15407 be merged to trunk. As described
> in that JIRA, this is a complete reimplementation of the current
> hadoop-azure storage driver (WASB) with some significant advantages. The
> impact outside of that module is very limited, however, and it appears that
> on-going improvements will continue to be so. The tests have been stable
> for some time, and I believe we've reached the point of being ready for
> broader feedback and to continue incremental improvements in trunk.
> HADOOP-15407 was rebased on trunk today and I had a successful test run.
> 
> I'd like to call out the contributions of Thomas Marquardt, Da Zhou, and Steve
> Loughran who have all contributed significantly to getting this branch to
> its current state. Numerous other developers are named in the commit log
> and the JIRA.
> 
> I'll start us off:
> 
> +1 (binding)



[jira] [Created] (HADOOP-15767) [JDK10] Building native package on JDK10 fails due to missing javah

2018-09-18 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HADOOP-15767:
-

 Summary: [JDK10] Building native package on JDK10 fails due to 
missing javah
 Key: HADOOP-15767
 URL: https://issues.apache.org/jira/browse/HADOOP-15767
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


This is the error log.
{noformat}
[ERROR] Failed to execute goal 
org.codehaus.mojo:native-maven-plugin:1.0-alpha-8:javah (default) on project 
hadoop-common: Error running javah command: Error executing command line. Exit 
code:127 -> [Help 1]
{noformat}

See also: https://github.com/mojohaus/maven-native/issues/17


