> On Sep 1, 2016, at 3:18 PM, Allen Wittenauer
> wrote:
>
>
>> On Sep 1, 2016, at 2:57 PM, Andrew Wang wrote:
>>
>> Steve requested a git hash for this release. This led us into a brief
>> discussion of our use of git tags, wherein we realized that al
I’d like to call for a vote to run for 5 days (ending Mon Sep 12, 2016 at
7AM PT) to merge the HADOOP-13341 feature branch into trunk. This branch was
developed exclusively by me. As usual with large shell script changes, it's
been broken up into several smaller commits to make it easier
> On Sep 8, 2016, at 2:50 AM, Steve Loughran wrote:
>
> I'm trying to do the review effort here even though I don't know detailed
> bash, as I expect I don't know any less than others, and what better way to
> learn than reviewing code written by people that do know bash?
Just a head
> On Sep 9, 2016, at 2:15 PM, Anu Engineer wrote:
>
> +1, Thanks for the effort. It brings in a world of consistency to the hadoop
> vars; and as usual reading your bash code was very educative.
Thanks!
There's still a handful of HDFS and MAPRED vars that begin with HADOOP,
b
The vote passes with 3 +1 binding votes.
I'll be merging this later today.
Thanks everyone!
> On Sep 7, 2016, at 6:44 AM, Allen Wittenauer
> wrote:
>
>
> I’d like to call for a vote to run for 5 days (ending Mon Sep 12, 2016 at
> 7AM PT) to merge the HADOOP-13
> On Sep 13, 2016, at 7:31 AM, Apache Jenkins Server
> wrote:
>
> For more details, see
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/163/
>
> unit:
>
>
>
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/163/artifact/out/patch-unit-hadoop-mapreduc
> On Sep 24, 2016, at 4:24 AM, Steve Loughran wrote:
>
>
> On 23 Sep 2016, at 18:55, Andrew Wang
> <andrew.w...@cloudera.com> wrote:
>
> Have you git blamed to dig up the original JIRA conversation? I think that
> deprecation predates many of us, so you might not get much historical
> On Sep 24, 2016, at 5:11 AM, Allen Wittenauer
> wrote:
>
>
>> On Sep 24, 2016, at 4:24 AM, Steve Loughran wrote:
>>
>>
>> On 23 Sep 2016, at 18:55, Andrew Wang
>> <andrew.w...@cloudera.com> wrote:
>>
>> Have you git b
> On Sep 27, 2016, at 10:06 AM, Steve Loughran wrote:
>
> Things aren't laying out very well in the site now the new logo is in.
>
> https://hadoop.apache.org/docs/stable2/hadoop-project-dist/hadoop-common/FileSystemShell.html
>
> Is there anyone who understands CSS well enough to help?
> On Sep 28, 2016, at 9:56 AM, Sean Mackrory wrote:
> and
> Bigtop's integration tests too. Anything else you would add to this?
Be aware that bigtop does things to the jar layout that will break parts of
hadoop 3.
> On Oct 5, 2016, at 10:35 PM, Akira Ajisaka wrote:
> Can we rename it?
>
> AFAIK, hadoop releases were built by hortonmu in 2014, and it was renamed to
> jenkins.
That's not how that works.
It's literally storing the id of the person who built the classes. It
wasn't 'renamed' t
> On Oct 6, 2016, at 1:39 PM, Akira Ajisaka wrote:
>
> > It wasn't 'renamed' to jenkins, prior releases were actually built by and
> > on the Jenkins infrastructure. Which was a very very bad idea: it's
> > insecure and pretty much against ASF policy.
>
> Sorry for the confusion. I should no
> On Oct 27, 2016, at 8:20 PM, Brahma Reddy Battula
> wrote:
>
> As we are supporting Hadoop on Windows, I feel we should have a pre-commit
> build on Windows (at least in qbt).
I actually tried to get Apache Yetus testing Apache Hadoop on the
hadoop-win box last year. (This was befo
> On Nov 1, 2016, at 4:00 AM, Brahma Reddy Battula
> wrote:
> Thanks for the information. Seems to be a challenge to get it done. Can we
> try to get a dedicated machine for this..?
We have one. What we don't have is someone dedicated enough to keep it
running. It is not a "one and done".
> On Nov 7, 2016, at 11:29 AM, Ravi Prakash wrote:
>
> I have a preference for d) Contributed by XXX.
>
> Wouldn't signed-off require the commit to come from the contributor? What
> about people who submit patch files?
If the patches are built with 'git format-patch', no.
In
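For anyone curious how the attribution survives, here is a hedged end-to-end sketch; every path, file name, and identity below is made up for the demo, and the real workflow of course involves JIRA attachments rather than two local repos:

```shell
# Sketch: 'git format-patch' embeds the original author in the patch file,
# so a committer applying it with 'git am' preserves attribution even
# without a Signed-off-by line. All names/paths are illustrative.
set -e
work=$(mktemp -d)

# contributor's repo: one commit authored by "Contributor"
git init -q "$work/src"
cd "$work/src"
echo change > file.txt
git add file.txt
git -c user.name=Contributor -c user.email=c@example.org commit -q -m "example change"
git format-patch -1 -o "$work" HEAD >/dev/null

# committer's repo: apply the patch under a different identity
git init -q "$work/dst"
cd "$work/dst"
git -c user.name=Committer -c user.email=k@example.org commit -q --allow-empty -m base
git -c user.name=Committer -c user.email=k@example.org am -q "$work"/0001-*.patch

git log -1 --format='%an %cn'   # author stays "Contributor"; committer is "Committer"
```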
> On Jan 18, 2017, at 11:21 AM, Chris Trezzo wrote:
>
> Thanks Sangjin for pushing this forward! I have a few questions:
These are great questions, because I know I'm not seeing a whole lot of
substance in this vote. The way to EOL software in the open source universe is
with new rel
If you ran mvn clean at any point in your repo between create-release and mvn
deploy, you'll need to start at running create-release again. create-release
leaves things in a state that mvn deploy should be ready to go, with no clean
necessary.
> On Jan 20, 2017, at 11:12 AM, Junping Du wrote
> On Jan 20, 2017, at 2:36 PM, Andrew Wang wrote:
>
> http://home.apache.org/~wang/3.0.0-alpha2-RC0/
There are quite a few JIRA issues that need release notes.
> On Jan 22, 2017, at 9:05 PM, Allen Wittenauer
> wrote:
>
>
>
>
>
>> On Jan 20, 2017, at 2:36 PM, Andrew Wang wrote:
>>
>> http://home.apache.org/~wang/3.0.0-alpha2-RC0/
>
> There are quite a few JIRA issues that need release notes.
> On Jan 21, 2017, at 7:08 PM, Karthik Kambatla wrote:
>
> 3. RM: some method to madness. Junping, for instance, is trying to roll
> a release with 2300 patches. It is a huge time investment. (Thanks again,
> Junping.) Smaller releases are easier to manage. A target release cadence,
> co
> On Jan 23, 2017, at 8:50 PM, Chris Douglas wrote:
>
> Thanks for all your work on this, Andrew. It's great to see the 3.x
> series moving forward.
>
> If you were willing to modify the release notes and add the LICENSE to
> the jar, we don't need to reset the clock on the VOTE, IMO.
FWIW, I
> On Mar 6, 2017, at 11:27 AM, Andrew Wang wrote:
> Looks like H9 is having problems cleaning the workspace, leading to a lot
> of silent precommit failures. I filed this INFRA JIRA:
> https://issues.apache.org/jira/browse/INFRA-13618
Have we tried writing a job that nukes the workspace on that
> On Mar 6, 2017, at 1:17 PM, Andrew Wang wrote:
>
> Do you have a link to your old job somewhere?
Nope, but it’s trivial to write: a single job that only runs on H9 and
removes the other job’s workspace dir. You can also try using the “Wipe out
current workspace” button.
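A hedged sketch of what such a nuke-the-workspace job amounts to; the Jenkins-style paths below are stand-ins created just for the demo, not the real H9 layout:

```shell
# Sketch: a cleanup job that removes another job's stale workspace.
# The directory tree here is fabricated so the demo is self-contained.
set -e
jenkins_root=$(mktemp -d)                        # stand-in for the node's workspace root
stale="$jenkins_root/workspace/PreCommit-HDFS-Build"
mkdir -p "$stale"
touch "$stale/leftover.class"                    # pretend residue from a broken run

rm -rf -- "$stale"                               # the entirety of the "cleanup job"
[ ! -e "$stale" ] && echo "workspace wiped"
```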
> I'm als
> On Mar 6, 2017, at 1:57 PM, Andrew Wang wrote:
>
> I'll leave it there so it's ready for next time. If this keeps happening on
> H9, then I'm going to ask infra to reimage it. FWIW I haven't seen this on
> our internal unit test runs, so it points to an H9-specific issue.
I’ve seen
> On Mar 7, 2017, at 2:51 PM, Andrew Wang wrote:
> I think it'd be nice to
> have a nightly Jenkins job that builds an RC,
Just a reminder that any such build cannot be used for an actual
release:
http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
> On Mar 8, 2017, at 9:34 AM, Sean Busbey wrote:
>
> Is this HADOOP-13951?
Almost certainly. Here's the run that broke it again:
https://builds.apache.org/job/PreCommit-HDFS-Build/18591
Likely something in the HDFS-7240 branch or with this patch that's
doing Bad Things (tm).
> On Mar 8, 2017, at 12:04 PM, Allen Wittenauer
> wrote:
>
>
>> On Mar 8, 2017, at 9:34 AM, Sean Busbey wrote:
>>
>> Is this HADOOP-13951?
>
> Almost certainly. Here's the run that broke it again:
>
> https://builds.apache.org/j
> On Mar 8, 2017, at 10:55 AM, Marton Elek wrote:
>
> I think the main point here is the testing of the release script, not the
> creation of the official release.
… except the Hadoop PMC was doing exactly this from 2.3.0 up until
recently. Which means we have a few years worth of rel
> On Mar 8, 2017, at 2:21 PM, Anu Engineer wrote:
>
> Hi Allen,
>> Likely something in the HDFS-7240 branch or with this patch that's
>> doing Bad Things (tm).
>
> Thanks for bringing this to my attention, but I am surprised that a mvn
> command is able to kill a test machine.
F
> On Mar 8, 2017, at 1:54 PM, Allen Wittenauer
> wrote:
>
> This is already possible:
> * don’t use --asfrelease
> * use --sign, --native, and, if appropriate for your platform,
> --docker and --dockercache
Oh yeah, I forg
> On Mar 8, 2017, at 2:53 PM, Anu Engineer wrote:
>
> Agreed, but I was under the impression that we would kill the container under
> OOM conditions and not the whole base machine.
We do not run our docker containers under a cgroup.
--
> On Mar 9, 2017, at 2:15 PM, Andrew Wang wrote:
>
> H9 is again eating our builds.
>
H0: https://builds.apache.org/job/PreCommit-HDFS-Build/18652/console
H6: https://builds.apache.org/job/PreCommit-HDFS-Build/18646/console
> On Mar 21, 2017, at 10:12 AM, Andrew Wang wrote:
>
> I poked around a bit. The 3.0.0-alpha2 binary tarball is only 246M and has
> more changes than 2.8.0.
Not to disclaim any other potential issues, but it's worth noting 3.x de-dupes
jar files as part of the packaging process. So it's not
Just a heads up. Looks like someone removed the Finish Date off of 2.8.0 in JIRA.
It needs to be put back to match what is in the artifacts that we voted on.
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
Hey gang.
Could I get a quick review of HADOOP-14202? This changes a few things:
* Makes the rest of the _USER vars consistent with the other changes in
trunk (e.g., HADOOP_SECURE_DN_USER becomes HDFS_DATANODE_SECURE_USER)
* deprecation warnings as necessary
*
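To illustrate the deprecation-warning behavior mentioned above, here is a hedged, simplified POSIX-sh sketch of the pattern; the real logic lives in hadoop-functions.sh and differs in detail, and only the two variable names come from this message:

```shell
# Simplified sketch of an env-var deprecation shim (not the actual
# hadoop-functions.sh implementation): warn when the old name is set,
# and map its value onto the new name if the new one is unset.
deprecate_envvar() {
  oldvar=$1; newvar=$2
  oldval=$(eval "printf '%s' \"\${$oldvar}\"")
  newval=$(eval "printf '%s' \"\${$newvar}\"")
  if [ -n "$oldval" ]; then
    echo "WARNING: $oldvar has been replaced by $newvar. Using value of $oldvar." >&2
    # only honor the old value when the new variable is unset
    [ -z "$newval" ] && eval "$newvar=\"\$oldval\""
  fi
}

HADOOP_SECURE_DN_USER=hdfs
deprecate_envvar HADOOP_SECURE_DN_USER HDFS_DATANODE_SECURE_USER
echo "$HDFS_DATANODE_SECURE_USER"   # hdfs
```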
> On Mar 28, 2017, at 5:09 PM, Chris Douglas wrote:
>
> I haven't seen data identifying PB as a bottleneck, but the
> non-x86/non-Linux and dev setup arguments may make this worthwhile. -C
FWIW, we have the same problem with leveldbjni-all. (See the ASF
PowerPC build logs) I keep meani
This morning I had a bit of a shower thought:
With the new shaded hadoop client in 3.0, is there any reason the
default classpath should remain the full blown jar list? e.g., shouldn’t
‘hadoop classpath’ just return configuration, user supplied bits (e.g.,
HADOOP_USER_CLASSPAT
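A hedged sketch of what a slimmed-down `hadoop classpath` could return under that idea; every path and the variable handling here are illustrative, not the actual script's behavior:

```shell
# Sketch: with a shaded client, the default classpath could collapse to
# config dir + user-supplied entries + the shaded jar, instead of the
# full blown jar list. All paths below are made up for the demo.
unset HADOOP_USER_CLASSPATH                      # deterministic for the demo
conf_dir=/etc/hadoop/conf
shaded_jar=/usr/lib/hadoop/hadoop-client-runtime.jar

cp_out=$conf_dir
[ -n "${HADOOP_USER_CLASSPATH:-}" ] && cp_out="$cp_out:$HADOOP_USER_CLASSPATH"
cp_out="$cp_out:$shaded_jar"
echo "$cp_out"   # /etc/hadoop/conf:/usr/lib/hadoop/hadoop-client-runtime.jar
```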
> What's the current contract for `hadoop classpath`? Would it be safer to
> introduce `hadoop userclasspath` or similar for this behavior?
>
> I'm betting that changing `hadoop classpath` will lead to some breakages,
> so I'd prefer to make this new behavior opt-in.
>
> Best,
> On Apr 13, 2017, at 11:13 PM, Arun Suresh wrote:
>
> Yup,
>
> YARN Pre-Commit tests are having the same problem as well.
> Is there anything that can be done to fix this ? Ping Yetus folks (Allen /
> Sean)
https://issues.apache.org/jira/browse/HADOOP-14311
--
Looks like someone reset HEAD back to Mar 31.
> On Apr 16, 2017, at 12:08 AM, Apache Jenkins Server
> wrote:
>
> For more details, see
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/378/
>
>
>
>
>
> -1 overall
>
>
> The following subsystems voted -1
Hey gang.
HADOOP-14316 enables the spotbugs back-end for the findbugs front-end.
Spotbugs (https://spotbugs.github.io/) is the fork of findbugs that the
community and some of the major contributors have made to move findbugs
forward. It is geared towards JDK8 and JDK9.
Befor
> On Apr 19, 2017, at 10:52 AM, Wei-Chiu Chuang wrote:
> That sounds scary. Would you mind to share the list of bugs that spotbugs
> found? Sounds like some of them may warrant new blockers jiras for Hadoop 3.
I've added the list to the JIRA.
---
> On Apr 25, 2017, at 12:35 AM, Akira Ajisaka wrote:
> > Maybe we should create a jira to track this?
>
> I think now either way (reopen or create) is fine.
>
> Release doc maker creates change logs by fetching information from JIRA, so
> reopening the tickets should be avoided when a release
> On May 1, 2017, at 2:27 PM, Andrew Wang wrote:
> I believe I asked about this on dev-yetus a while back. I'd prefer that the
> presence of the fix version be sufficient to indicate whether a JIRA is
> included in a release branch. Yetus requires that the JIRA be resolved as
> "Fixed" to show
Is there any reason to not Close -alpha1+resolved state JIRAs? It's been quite
a while and those definitely should not be getting re-opened anymore. What about
-alpha2's that are also resolved?
This is just a heads up.
The Apache Yetus community is debating removing the maven eclipse
plug-in testing support from precommit. (Given that Apache Hadoop is currently
rigged up to always run Yetus' master for testing purposes, this means Hadoop
will see the removal i
It looks like HADOOP-13578 added Facebook's zstd compression codec.
Unfortunately, that codec is using the same 3-clause BSD (LICENSE file) +
patent grant license (PATENTS file) that React is using and RocksDB was using.
Should that code get reverted?
---
> On Jul 21, 2017, at 5:46 PM, Konstantin Shvachko wrote:
>
> + d...@yetus.apache.org
>
> Guys, could you please take a look. Seems like Yetus problem with
> pre-commit build for branch-2.7.
branch-2.7 is missing stuff in .gitignore.
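A hedged sketch of the kind of entry involved; `patchprocess/` is the directory named elsewhere in this thread, and whether branch-2.7 needs exactly this line is an assumption:

```
# entry present on later branches but missing from branch-2.7's .gitignore
patchprocess/
```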
---
The "update the maven snapshot repo after a commit then update the JIRA issue
on the build status" job caught it (https://s.apache.org/bXUp), but it looks
like its feedback and the -1 on the mvninstall after it were ignored. *shrugs*
> On Jul 22, 2017, at 3:33 PM, Bokor Andras wrote:
>
> Somethin
> On Jul 22, 2017, at 10:12 PM, Brahma Reddy Battula
> wrote:
>
> FYI.. After Jian He's comment on YARN-6804, I reverted. Now the trunk build will pass.
>
Yup:
https://s.apache.org/S6yd
> On Jul 22, 2017, at 11:31 PM, Brahma Reddy Battula
> wrote:
>
> AFAIK, patch install (I mean, mvn install) will not run all the projects,
> hence you didn't see this failure.
Correct. By design, precommit only runs the test suite against the maven
modules that the patch modifies to save (si
> >>
> >> Should we add "patchprocess/" to .gitignore, is that the problem for 2.7?
> >>
> >> Thanks,
> >> --Konstantin
> >>
> >> On Fri, Jul 21, 2017 at 6:24 PM, Konstantin Shvachko <
> >> shv.had...@gmail.com>
I've noticed that in almost all of the (official) documentation, it's written
that in order to build on Windows the maven command should be some form of:
mvn package -Pdist,native-win -DskipTests
The one constant is that the native-win profile is always listed. What's
interesting is that nat
For more details, see https://builds.apache.org/job/hadoop-trunk-win/136/
HADOOP-1466. Flexible Visual Studio support
-1 overall
The following subsystems voted -1:
unit
The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit
Specific tests:
> On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko
> wrote:
>
> https://wiki.apache.org/hadoop/HowToReleasePreDSBCR
FYI:
If you are using ASF Jenkins to create an ASF release artifact,
it's pretty much an automatic vote failure as any such release is in violation
of
For more details, see https://builds.apache.org/job/hadoop-trunk-win/141/
[Jul 31, 2017 2:09:13 AM] (aajisaka) YARN-5728.
TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization
[Jul 31, 2017 5:08:30 AM] (aajisaka) HADOOP-14690. RetryInvocationHandler
should override toString().
[Jul 31, 20
> On Jul 31, 2017, at 4:18 PM, Andrew Wang wrote:
>
> Forking this off to not distract from release activities.
>
> I filed https://issues.apache.org/jira/browse/LEGAL-323 to get clarity on the
> matter. I read the entire webpage, and it could be improved one way or the
> other.
IAN
-general/20.mbox/%3C4EB0827C.6040204%40apache.org%3E
>
> Doug Cutting:
> "Folks should not primarily evaluate binaries when voting. The ASF primarily
> produces and publishes source-code
> so voting artifacts should be optimized for evaluation of that."
>
> Thanks,
>
For more details, see https://builds.apache.org/job/hadoop-trunk-win/146/
[Aug 2, 2017 4:25:19 PM] (yufei) YARN-6895. [FairScheduler] Preemption
reservation may cause regular
-1 overall
The following subsystems voted -1:
unit
The following subsystems are considered long running:
(runti
> On Aug 7, 2017, at 3:53 AM, Akira Ajisaka wrote:
>
>
> I'll ask INFRA to create a git repository if there are no objections.
There's no need to create a git repo. They just need to know to pull
the website from the asf-site branch.
--
> On Aug 8, 2017, at 12:36 AM, Akira Ajisaka wrote:
>
> Now I'm okay with not creating another repo.
> I'm thinking the following procedures may work:
>
> 1. Create ./asf-site directory
> 2. Add the content of https://github.com/elek/hadoop-site-proposal to the
> directory
> 3. Generate web pa
Something else to consider. The main hadoop repo has precommit support. I
could easily see a quick and dirty maven pom.xml and dockerfile put in place to
build the website against “patches” uploaded to JIRA or github.
It’s probably worth pointing out that as soon as the
native-maven-plugin gets an update, we’ll be able to have a longer path.
Unfortunately, like leveldbjni-all, it’s effectively dead as far as updates;
the fix was committed in 2014 and there hasn’t been an update since.
As a
> On Aug 9, 2017, at 5:04 AM, Ewan Higgs wrote:
> Is Jenkins even building these?
No, Jenkins is not currently running any of the ITs (this one and the
ones in hadoop-aws). Probably worth pointing out that a large chunk of the
unit tests are really misclassified ITs (esp in hadoop-map
For more details, see https://builds.apache.org/job/hadoop-trunk-win/155/
[Aug 8, 2017 10:37:47 PM] (stevel) HADOOP-14715. TestWasbRemoteCallHelper
failing. Contributed by Esfandiar
[Aug 8, 2017 11:33:18 PM] (wheat9) HADOOP-14598. Blacklist Http/HttpsFileSystem
in
[Aug 8, 2017 11:48:29 PM] (subr
> On Aug 14, 2017, at 5:36 AM, Brahma Reddy Battula
> wrote:
>
> How about let this comment on Jira if there is any failure(compile/Test)..?
> so that corresponding Jira reporter/committer can look into it(can
> reduce/avoid pre-commit failures..?).
>
hadoop-commit-trunk does. We c
> On Aug 22, 2017, at 6:00 AM, Steve Loughran wrote:
>
>
> I'm having problems getting the s3 classpath setup on the CLI & am trying to
> work out what I'm doing wrong.
>
>
> without setting things up, you can't expect to talk to blobstores
>
> hadoop fs -ls wasb://something/
> hadoop fs -l
We should avoid turning this into a replay of Apache Hadoop 2.6.0 (and
to a lesser degree, 2.7.0 and 2.8.0) where a bunch of last minute
“experimental” features derail stability for a significantly long period of
time.
> On Aug 25, 2017, at 10:00 AM, Steve Loughran wrote:
>
> Catching up on this. Looks like I don't have a hadoop-aws profile, which
> explains a lot, doesn't it.
Yes. This is exactly the type of failure I'd expect.
> How do those profiles get created/copied in?
Maven kludgery.
> On Aug 25, 2017, at 10:36 AM, Andrew Wang wrote:
> Until we need to make incompatible changes, there's no need for
> a Hadoop 4.0 version.
Some questions:
Doesn't this place an undue burden on the contributor with the first
incompatible patch to prove worthiness? What happens if it
> On Aug 25, 2017, at 1:23 PM, Jason Lowe wrote:
>
> Allen Wittenauer wrote:
>
> > Doesn't this place an undue burden on the contributor with the first
> > incompatible patch to prove worthiness? What happens if it is decided that
> > it's not g
> On Aug 28, 2017, at 12:41 PM, Jason Lowe wrote:
>
> I think this gets back to the "if it's worth committing" part.
This brings us back to my original question:
"Doesn't this place an undue burden on the contributor with the first
incompatible patch to prove worthiness? What
Just to close the loop on this a bit ...
Windows always triggers the 'native-win' profile because winutils is
currently required to actually use Apache Hadoop on that platform. On other
platforms, the 'native' profile is optional since there is enough support in
the JDK to at least do
> On Aug 28, 2017, at 9:58 AM, Allen Wittenauer
> wrote:
> The automation only goes so far. At least while investigating Yetus
> bugs, I've seen more than enough blatant and purposeful ignored errors and
> warnings that I'm not convinced it will be effectiv
> On Aug 31, 2017, at 8:33 PM, Jian He wrote:
> I would like to call a vote for merging yarn-native-services to trunk.
1) Did I miss it or is there no actual end-user documentation on how to
use this? I see
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-serv
> On Sep 5, 2017, at 2:53 PM, Jian He wrote:
>
>> Based on the documentation, this doesn’t appear to be a fully functional DNS
>> server as an admin would expect (e.g., BIND, Knot, whatever). Where’s
>> forwarding? How do I setup notify? Are secondaries even supported? etc, etc.
>
> It seems l
> On Sep 5, 2017, at 3:12 PM, Gour Saha wrote:
>
> 2) Lots of markdown problems in the NativeServicesDiscovery.md document.
> This includes things like 'yarnsite.xml' (missing a dash.)
>
> The md patch uploaded to YARN-5244 had some special chars. I fixed those
> in YARN-7161.
It’s a
> On Sep 6, 2017, at 7:20 AM, Steve Loughran wrote:
>
>
> Every morning my laptop downloads the doxia 1.8 snapshot for its build
>
>
….
> This implies that the build isn't reproducible, which isn't that bad for a
> short-lived dev branch, but not what we want for any releases
Thi
> On Sep 6, 2017, at 9:53 AM, Steve Loughran wrote:
>
> Well, it turns out not to like depth-4 MD tags, of the form: DOXIA-533,
> though that looks like a long-standing issue, not a regression
Yup.
> workaround: don't use level4 titles. And do check locally before bothering to
> On Sep 5, 2017, at 6:23 PM, Jian He wrote:
>
>> If it doesn’t have all the bells and whistles, then it shouldn’t be on
>> port 53 by default.
> Sure, I’ll change the default port to not use 53 and document it.
>> *how* is it getting launched on a privileged port? It sounds like the
> Begin forwarded message:
>
> From: "Rory O'Donnell"
> Subject: Moving Java Forward Faster
> Date: September 7, 2017 at 2:12:45 AM PDT
> To: "strub...@yahoo.de >> Mark Struberg"
> Cc: rory.odonn...@oracle.com, abdul.kolarku...@oracle.com,
> balchandra.vai...@oracle.com, dalibor.to...@oracle.
I’m a little hesitant to share this because it’s really Not Quite Ready
for primetime, but I figured others might want to play with it early anyway.
https://builds.apache.org/view/H-L/view/Hadoop/job/Precommit-hadoop-win/
Will let you test patches on Wi
> On Sep 8, 2017, at 9:25 AM, Jian He wrote:
>
> Hi Allen,
> The documentations are committed. Please check QuickStart.md and others in
> the same folder.
> YarnCommands.md doc is updated to include new commands.
> DNS default port is also documented.
> Would you like to give a look and see if
> On Sep 14, 2017, at 8:03 AM, Sean Busbey wrote:
>
> * HADOOP-14654 updated commons-httplient to a new patch release in
> hadoop-project
> * Precommit checked the modules that changed (i.e. not many)
> * nightly had Azure support break due to a change in behavior.
OK, so it worked as c
> On Sep 14, 2017, at 11:01 AM, Sean Busbey wrote:
>
>> Committers MUST check the qbt output after a commit. They MUST make sure
> their commit didn’t break something new.
>
> How do we make this easier / more likely to happen?
>
> For example, I don't see any notice on HADOOP-14654 that the
> On Sep 19, 2017, at 6:35 AM, Brahma Reddy Battula
> wrote:
>
> qbt is failing from two days with following errors, any idea on this..?
Nothing to be too concerned about.
This is what it looks like when a build server gets bounced or crashes.
INFRA team knows our jobs take
> On Sep 19, 2017, at 6:48 AM, Brahma Reddy Battula
> wrote:
>
> Can we run "mvn install" and "compile" for all the modules after applying the
> patch(we can skip shadeclients)
We need to get over this idea that precommit is going to find all
problems every time. Committers actually
> On Oct 6, 2017, at 1:31 PM, Andrew Wang wrote:
>
> - Still waiting on Allen to review YARN native services feature.
Fake news.
I’m still -1 on it, at least prior to a patch that posted late
yesterday. I’ll probably have a chance to play with it early next week.
Key pro
> On Oct 6, 2017, at 5:51 PM, Eric Yang wrote:
> yarn application -deploy –f spec.json
> yarn application -stop
> yarn application -restart
> yarn application -remove
>
> and
>
> yarn application –list will display both application list from RM as well as
> docker services?
To whoever set this up:
There was a job config problem where the Jenkins branch parameter wasn’t passed
to Yetus. Therefore both of these reports have been against trunk. I’ve fixed
this job (as well as the other jobs) to honor that parameter. I’ve kicked off
a new run with these changes.
I’m really confused why this causes the Yahoo! QA boxes to go catatonic (!?!)
during the run. As in, never come back online, probably in a kernel panic.
It’s pretty consistently in hadoop-hdfs, so something is going wrong there… is
branch-2 hdfs behaving badly? Someone needs to run the hadoop
.
>
> Thanks,
> Subru
>
> On Mon, Oct 23, 2017 at 11:26 AM, Vrushali C wrote:
> Hi Allen,
>
> I have filed https://issues.apache.org/jira/browse/YARN-7380 for the
> timeline service findbugs warnings.
>
> thanks
> Vrushali
>
>
> On Mon, Oct 23, 2017
> On Oct 23, 2017, at 12:50 PM, Allen Wittenauer
> wrote:
>
>
>
> With no other information or access to go on, my current hunch is that one of
> the HDFS unit tests is ballooning in memory size. The easiest way to kill a
> Linux machine is to eat all of the RAM,
t tell which daemons/components of HDFS consumes unexpected high
>> memory. Don't sounds like a solid bug report to me.
>>
>>
>>
>> Thanks,
>>
>>
>> Junping
>>
>>
>>
>> From: Sean Bus
> On Oct 24, 2017, at 4:10 PM, Andrew Wang wrote:
>
> FWIW we've been running branch-3.0 unit tests successfully internally, though
> we have separate jobs for Common, HDFS, YARN, and MR. The failures here are
> probably a property of running everything in the same JVM, which I've found
> pro
… I’m going to rework the precommit jobs to use the branch of Yetus with the
process and memory limit protections.
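A hedged sketch of the general mechanism (not Yetus' actual code): capping a child process's address space with ulimit, so a runaway test fails with an allocation error instead of taking the whole node down:

```shell
# Sketch: apply a virtual-memory cap inside a subshell so it only affects
# that process tree, then read the cap back. The 1 GB figure is arbitrary.
cap_kb=$( ( ulimit -v 1048576; ulimit -v ) )
echo "virtual memory capped at ${cap_kb} KB"
```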
> On Nov 3, 2017, at 12:08 PM, Stack wrote:
>
> On Sat, Oct 28, 2017 at 2:00 PM, Konstantin Shvachko
> wrote:
>
>> It is an interesting question whether Ozone should be a part of Hadoop.
>
> I don't see a direct answer to this question. Is there one? Pardon me if
> I've not seen it but I'm in
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/599/
[Nov 19, 2017 8:39:37 PM] (aw) HADOOP-13514. Upgrade maven surefire plugin to
2.20.1
-1 overall
The following subsystems voted -1:
asflicense findbugs unit
The following subsystems voted -1 but
w
The original release script and instructions broke the build up into
three or so steps. When I rewrote it, I kept that same model. It’s probably
time to re-think that. In particular, it should probably be one big step that
even does the maven deploy. There’s really no harm in doing th
> On Nov 21, 2017, at 2:16 PM, Vinod Kumar Vavilapalli
> wrote:
>
>>> - $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start historyserver doesn't even
>>> work. Not just deprecated in favor of timelineserver as was advertised.
>>
>> This works for me in trunk and the bash code doesn’t appear to