yes, bumped them up to

export MAVEN_OPTS="-Xmx3072m -XX:MaxPermSize=768m"
export ANT_OPTS="$MAVEN_OPTS"

also extended test run times.
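One caveat worth a sketch here (values are illustrative, not the project's actual settings): MAVEN_OPTS only sizes the Maven JVM itself, while test JVMs forked by Surefire take their heap from the argLine property, so a test-side OOM may need that bumped too:

```shell
# MAVEN_OPTS governs the Maven process itself; Surefire-forked test JVMs
# are sized by the argLine property instead. Values here are illustrative.
ARGLINE="-Xmx2048m -XX:MaxPermSize=512m"
echo "mvn test -DargLine=\"${ARGLINE}\""
```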
On 8 December 2014 at 00:58, Ted Yu yuzhih...@gmail.com wrote:
Looking at the test failures of
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/ which uses
Looks like there was still OutOfMemoryError :
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1964/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameDirAcrossSnapshottableDirs/
FYI
Re G-Cloud: as far as I can tell, what they want is a writeup for existing
services. So if we get healthstat running and then write up what we can do
in terms of data flow management and visualisation, that should cut it.
It would be nice to put the calculator on there as software as a service.
PS. I made app.py and stuff...
On 8 December 2014 at 19:58, Colin McCabe cmcc...@alumni.cmu.edu wrote:
It would be nice if we could have a separate .m2 directory per test
executor. It seems like that would eliminate these race conditions once
and for all, at the cost of storing a few extra jars (proportional to the
# of executors).
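A minimal sketch of that idea, assuming the builds run under Jenkins (which sets EXECUTOR_NUMBER for every build slot); the directory layout is an illustration, and maven.repo.local is the standard Maven property for relocating the local repository:

```shell
# Give each Jenkins executor its own Maven local repository so concurrent
# builds stop racing on a shared ~/.m2. EXECUTOR_NUMBER is set by Jenkins;
# default to 0 when running by hand. The layout is an assumption, not policy.
EXECUTOR_NUMBER="${EXECUTOR_NUMBER:-0}"
REPO="$HOME/.m2-executor-${EXECUTOR_NUMBER}/repository"
echo "mvn -Dmaven.repo.local=${REPO} clean test"
```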
The latest migration status: if the Jenkins builds are happy then the
patch will go in; I'll do that Monday morning, 10:00 UTC.

https://builds.apache.org/view/H-L/view/Hadoop/

Getting Jenkins to work has been surprisingly difficult... it turns out
that those builds which we thought were java7 or
Looking at the test failures of
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/ which uses jdk 1.7:
e.g.
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameFileAndDeleteSnapshot/
yeah, I'm trying to set some of the common ones up first.

it's a bit confusing making sense of and isolating JVM updates from other
test failures, especially as some of the failures seem intermittent, and
some of the test runs (hadoop-hdfs-trunk) don't even collect all the test
results; you see a
I'm planning to flip the Javac language JVM settings to java 7 this week
https://issues.apache.org/jira/browse/HADOOP-10530
the latest patch also has a profile that sets the language to java8, for
the curious; one bit of code will need patching to compile there.
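As a hedged illustration (maven.compiler.source/target are standard maven-compiler-plugin properties; "java8" as the profile id is an assumption about the patch, not confirmed here), the language level can be pinned or switched from the command line:

```shell
# Standard maven-compiler-plugin properties; "java8" as a profile id is an
# assumption about the HADOOP-10530 patch, shown for illustration only.
echo "mvn clean install -Dmaven.compiler.source=1.7 -Dmaven.compiler.target=1.7"
echo "mvn clean install -Pjava8"
```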
The plan for the change
Hi Steve,
I think the pre-commit Jenkins builds are running Java 6; they need to be
switched to Java 7 as well.
Haohui
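A quick sanity check for which JDK a given build node actually uses (a sketch; slave paths and tool availability vary, hence the guards):

```shell
# Print the effective JDK on this node; JAVA_HOME may be unset on some
# slaves and java may be missing from PATH, so guard both lookups.
echo "JAVA_HOME=${JAVA_HOME:-unset}"
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "no java on PATH"
fi
```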