Re: [VOTE] Force "squash and merge" option for PR merge on github UI
sounds good! +1

Regards,
Da

> On Jul 17, 2019, at 7:32 PM, Dinesh Chitlangia wrote:
>
> +1, this is certainly useful.
>
> Thank you,
> Dinesh
>
> [earlier +1s and the original proposal quoted below, trimmed]
> [...]

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
Re: [VOTE] Force "squash and merge" option for PR merge on github UI
+1, this is certainly useful.

Thank you,
Dinesh

On Wed, Jul 17, 2019 at 10:04 PM Akira Ajisaka wrote:
> Makes sense, +1
>
> [...]

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
Re: [VOTE] Force "squash and merge" option for PR merge on github UI
Makes sense, +1

On Thu, Jul 18, 2019 at 10:01 AM Sangjin Lee wrote:
> +1. Sounds good to me.
>
> [...]

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
Re: [VOTE] Force "squash and merge" option for PR merge on github UI
+1. Sounds good to me.

On Wed, Jul 17, 2019 at 10:20 AM Iñigo Goiri wrote:
> +1
>
> [...]
Re: [VOTE] Force "squash and merge" option for PR merge on github UI
+1

On Wed, Jul 17, 2019 at 4:17 AM Steve Loughran wrote:
> +1 for squash and merge, with whoever does the merge adding the full commit
> message for the logs, with JIRA, contributor(s), etc.
>
> [...]
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/

[Jul 16, 2019 2:44:27 AM] (ayushsaxena) HDFS-14642. processMisReplicatedBlocks does not return correct processed
[Jul 16, 2019 4:51:59 AM] (github) HDDS-1736. Cleanup 2phase old HA code for Key requests. (#1038)
[Jul 16, 2019 8:33:22 AM] (bibinchundatt) YARN-9645. Fix Invalid event FINISHED_CONTAINERS_PULLED_BY_AM at NEW on
[Jul 16, 2019 9:36:41 AM] (msingh) HDDS-1756. DeleteContainerCommandHandler fails with NPE. Contributed by
[Jul 16, 2019 12:31:13 PM] (shashikant) HDDS-1492. Generated chunk size name too long. Contributed by
[Jul 16, 2019 2:52:14 PM] (elek) HDDS-1793. Acceptance test of ozone-topology cluster is failing
[Jul 16, 2019 7:52:29 PM] (xyao) HDDS-1787. NPE thrown while trying to find DN closest to client.
[Jul 16, 2019 7:58:59 PM] (aengineer) HDDS-1544. Support default Acls for volume, bucket, keys and prefix.
[Jul 16, 2019 8:47:51 PM] (github) HDDS-1813. Fix false warning from ozones3 acceptance test. Contributed
[Jul 16, 2019 11:59:57 PM] (github) HDDS-1775. Make OM KeyDeletingService compatible with HA model (#1063)
[Jul 17, 2019 12:14:23 AM] (github) HADOOP-15729. [s3a] Allow core threads to time out. (#1075)
[Jul 17, 2019 12:36:49 AM] (haibochen) YARN-9646. DistributedShell tests failed to bind to a local host name.
-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
    Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method. At TaskStatus.java:[lines 39-346]
    Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId. At WorkerId.java:[line 114]
    org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument. At WorkerId.java:[lines 114-115]

Failed junit tests :
    hadoop.hdfs.web.TestWebHdfsTimeouts
    hadoop.hdfs.server.datanode.TestDirectoryScanner
    hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
    hadoop.hdfs.server.federation.router.TestRouterRpc
    hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
    hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
    hadoop.tools.TestDistCpSystem
    hadoop.ozone.container.ozoneimpl.TestOzoneContainer

cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-compile-cc-root.txt [4.0K]
javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-compile-javac-root.txt [332K]
checkstyle: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-checkstyle-root.txt [17M]
hadolint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-patch-hadolint.txt [8.0K]
pathlen: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/pathlen.txt [12K]
pylint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-patch-pylint.txt [212K]
shellcheck: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-patch-shellcheck.txt [20K]
shelldocs: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/whitespace-eol.txt [9.6M]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/whitespace-tabs.txt [1.1M]
xml: http
[jira] [Created] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
Adam Antal created MAPREDUCE-7225:
-

             Summary: Fix broken current folder expansion during MR job start
                 Key: MAPREDUCE-7225
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7225
             Project: Hadoop Map/Reduce
          Issue Type: Bug
          Components: mrv2
    Affects Versions: 3.0.3, 2.9.0
            Reporter: Adam Antal
            Assignee: Adam Antal

Starting a sleep job that gives "." as a file to be localized works fine up until 2.9.0, but after that the user gets an IllegalArgumentException. This change is a side effect of HADOOP-12747, where the {{GenericOptionsParser#validateFiles}} function was modified.

Can be reproduced by starting a sleep job with "-files ." given as an extra parameter. Log:

{noformat}
sudo -u hdfs hadoop jar hadoop-mapreduce-client-jobclient-3.0.0.jar sleep -files . -m 1 -r 1 -rt 2000 -mt 2000
WARNING: Use "yarn jar" to launch YARN applications.
19/07/17 08:13:26 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm21
19/07/17 08:13:26 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/hdfs/.staging/job_1563349475208_0017
19/07/17 08:13:26 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1563349475208_0017
java.lang.IllegalArgumentException: Can not create a Path from an empty string
	at org.apache.hadoop.fs.Path.checkPathArg(Path.java:168)
	at org.apache.hadoop.fs.Path.<init>(Path.java:180)
	at org.apache.hadoop.fs.Path.<init>(Path.java:125)
	at org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:686)
	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:262)
	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:203)
	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:131)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
	at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
	at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:139)
	at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:147)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
{noformat}

--
This message was sent by Atlassian JIRA (v7.6.14#76016)

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/

[Jul 16, 2019 2:22:45 PM] (xkrogen) HDFS-14547. Improve memory efficiency of quotas when storage type quotas
[Jul 16, 2019 10:50:24 PM] (iwasakims) HADOOP-16386. FindBugs warning in branch-2: GlobalStorageStatistics

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
    hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
    Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean). At ColumnRWHelper.java:[line 335]

Failed junit tests :
    hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
    hadoop.hdfs.web.TestWebHdfsTimeouts
    hadoop.registry.secure.TestSecureLogins
    hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor
    hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
    hadoop.mapreduce.security.ssl.TestEncryptedShuffle

cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]
javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]
cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt [4.0K]
javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt [308K]
checkstyle: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-checkstyle-root.txt [16M]
hadolint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-patch-hadolint.txt [4.0K]
pathlen: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/pathlen.txt [12K]
pylint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-patch-pylint.txt [24K]
shellcheck: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-patch-shellcheck.txt [72K]
shelldocs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-patch-shelldocs.txt [8.0K]
whitespace: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/whitespace-eol.txt [12M]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/whitespace-tabs.txt [1.2M]
xml: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/xml.txt [12K]
findbugs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]
javadoc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt [1.1M]
unit: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [292K]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt [12K]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nod
Re: [VOTE] Force "squash and merge" option for PR merge on github UI
+1 for squash and merge, with whoever does the merge adding the full commit
message for the logs, with JIRA, contributor(s), etc.

One limit of the github process is that the author of the commit becomes
whoever hit the squash button, not whoever did the code, so it loses the
credit they are due. This is why I'm doing local merges (with some help
from smart-apply-patch). I think I'll have to explore smart-apply-patch to
see if I can do even more with it.

On Wed, Jul 17, 2019 at 7:07 AM Elek, Marton wrote:

> Hi,
>
> The Github UI (ui!) helps to merge Pull Requests to the proposed branch.
> There are three different ways to do it [1]:
>
> 1. Keep all the different commits from the PR branch and create one
> additional merge commit ("Create a merge commit")
>
> 2. Squash all the commits and commit the change as one patch ("Squash
> and merge")
>
> 3. Keep all the different commits from the PR branch but rebase; the
> merge commit will be missing ("Rebase and merge")
>
> As only option 2 is compatible with the existing development
> practices of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy
> consensus vote: if there are no objections within 3 days, I will ask
> INFRA to disable options 1 and 3 to make the process less error prone.
>
> Please let me know what you think.
>
> Thanks a lot,
> Marton
>
> ps: Personally I prefer to merge from local as it enables signing the
> commits and doing a final build before push. But this is a different
> story; this proposal is only about removing the options which are
> obviously risky...
>
> ps2: You can always do any kind of merge / commits from the CLI, for
> example to merge a feature branch together while keeping the history.
>
> [1]:
> https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
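The local-merge workflow Steve describes above can be sketched with a handful of git commands. This is only an illustrative sketch in a throwaway repository: the branch names, identities, and the JIRA key are made up and are not from the thread. The key point is `git commit --author=...`, which records the contributor as author even though the committer hit the squash locally:

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo
cd repo

# Base branch with one commit; capture the default branch name so the
# sketch works with both "master" and "main" defaults.
git -c user.name=Committer -c user.email=committer@example.com \
  commit -q --allow-empty -m "Initial commit"
base=$(git symbolic-ref --short HEAD)

# The contributor's PR branch with several work-in-progress commits.
git checkout -q -b pr-branch
echo "first attempt" > fix.txt
git add fix.txt
git -c user.name=Contributor -c user.email=contributor@example.com \
  commit -q -m "wip: first attempt"
echo "review feedback" >> fix.txt
git add fix.txt
git -c user.name=Contributor -c user.email=contributor@example.com \
  commit -q -m "wip: address review feedback"

# Back on the base branch: squash the PR into one staged change, then
# commit with the contributor recorded as author, i.e. the credit the
# GitHub squash button would otherwise assign to whoever pressed it.
git checkout -q "$base"
git merge -q --squash pr-branch
git -c user.name=Committer -c user.email=committer@example.com commit -q \
  --author="Contributor <contributor@example.com>" \
  -m "HADOOP-XXXXX. One-line summary of the change. Contributed by Contributor."

git log -1 --format='author=%an committer=%cn'
# prints: author=Contributor committer=Committer
```

The result is exactly the 1 issue = 1 commit shape the vote asks for, but with authorship preserved in the git metadata.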
Re: [VOTE] Force "squash and merge" option for PR merge on github UI
+1. Good idea.

On Wed, Jul 17, 2019 at 9:37 AM Ayush Saxena wrote:
> Thanks Marton, makes sense. +1
>
> [...]
Re: [VOTE] Force "squash and merge" option for PR merge on github UI
Thanks Marton, makes sense. +1

> On 17-Jul-2019, at 11:37 AM, Elek, Marton wrote:
>
> Hi,
>
> The Github UI (ui!) helps to merge Pull Requests to the proposed branch.
> There are three different ways to do it [1]:
>
> 1. Keep all the different commits from the PR branch and create one
> additional merge commit ("Create a merge commit")
>
> 2. Squash all the commits and commit the change as one patch ("Squash
> and merge")
>
> 3. Keep all the different commits from the PR branch but rebase; the
> merge commit will be missing ("Rebase and merge")
>
> As only option 2 is compatible with the existing development
> practices of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy
> consensus vote: if there are no objections within 3 days, I will ask
> INFRA to disable options 1 and 3 to make the process less error prone.
>
> Please let me know what you think.
>
> Thanks a lot,
> Marton
>
> ps: Personally I prefer to merge from local as it enables signing the
> commits and doing a final build before push. But this is a different
> story; this proposal is only about removing the options which are
> obviously risky...
>
> ps2: You can always do any kind of merge / commits from the CLI, for
> example to merge a feature branch together while keeping the history.
>
> [1]:
> https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github

-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
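Marton's ps2 point, that a feature branch can still be merged from the CLI with its history kept, comes down to forcing an explicit merge commit with `git merge --no-ff`. A minimal sketch in a throwaway repository; the branch and file names are illustrative, not from the thread:

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo
cd repo

git -c user.name=Dev -c user.email=dev@example.com \
  commit -q --allow-empty -m "Initial commit"
base=$(git symbolic-ref --short HEAD)   # handles both master/main defaults

# Feature branch with two commits whose history we want to keep.
git checkout -q -b feature-branch
echo "step one" > feature.txt
git add feature.txt
git -c user.name=Dev -c user.email=dev@example.com commit -q -m "Feature: step one"
echo "step two" >> feature.txt
git add feature.txt
git -c user.name=Dev -c user.email=dev@example.com commit -q -m "Feature: step two"

# --no-ff refuses the fast-forward, so a merge commit is created and the
# individual feature commits stay visible in the log.
git checkout -q "$base"
git -c user.name=Dev -c user.email=dev@example.com \
  merge -q --no-ff -m "Merge feature-branch" feature-branch

git rev-list --count HEAD            # prints 4: initial + two steps + merge commit
git rev-list --merges --count HEAD   # prints 1: the explicit merge commit
```

So disabling the GitHub "Create a merge commit" button loses nothing for feature branches: the same shape can always be produced deliberately from the CLI.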
Re: Re: Any thoughts making Submarine a separate Apache project?
+1, good idea. We are very much looking forward to it.

dashuiguailu...@gmail.com

From: Szilard Nemeth
Date: 2019-07-17 14:55
To: runlin zhang
CC: Xun Liu; Hadoop Common; yarn-dev; Hdfs-dev; mapreduce-dev; submarine-dev
Subject: Re: Any thoughts making Submarine a separate Apache project?

+1, this is a very good idea.
As the Hadoop repository has already grown huge and contains many projects, I think it's generally a good idea to separate projects out in their early phase.

On Wed, Jul 17, 2019, 08:50 runlin zhang wrote:
> +1, that will be great!
>
> > On Jul 10, 2019, at 3:34 PM, Xun Liu wrote:
> >
> > Hi all,
> >
> > This is Xun Liu, contributing to the Submarine project for deep learning
> > workloads running together with big data workloads on Hadoop clusters.
> >
> > There are a bunch of integrations of Submarine with other projects that are
> > finished or in progress, such as Apache Zeppelin, TonY, and Azkaban. The
> > next step for Submarine is to integrate with more projects like Apache
> > Arrow, Redis, and MLflow, to be able to handle end-to-end machine learning
> > use cases like model serving, notebook management, and advanced training
> > optimizations (such as auto parameter tuning and memory cache optimizations
> > for large training datasets), and to make it run on other platforms like
> > Kubernetes or natively on Cloud. LinkedIn also wants to donate the TonY
> > project to Apache so we can put Submarine and TonY together in the same
> > codebase (page #30:
> > https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-tony-tensorflow-on-yarn-and-beyond#30
> > ).
> >
> > This expands the scope of the original Submarine project in exciting new
> > ways. Toward that end, would it make sense to create a separate Submarine
> > project at Apache? This could speed up adoption of Submarine and allow
> > Submarine to grow into a full-blown machine learning platform.
> >
> > There will be lots of technical details to work out, but any initial
> > thoughts on this?
> >
> > Best Regards,
> > Xun Liu

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org