Will there be 2.2 patch releases?

2013-12-29 Thread Raymie Stata
In discussing YARN-1295, it's become clear that I'm confused about the
outcome of the "Next releases" thread.  I had assumed there would be
patch releases to 2.2, and indeed that one would be coming out early in Q1.
Is this correct?

If so, then things seem a little messed up right now in 2.2-land.
There already is a branch-2.2.1, but there hasn't been a release.  And
branch-2.2 has Maven version 2.2.2-SNAPSHOT.  Due to the 2.3 rename
a few weeks ago, it might be that the first patch release for 2.2
needs to be 2.2.2.  But if so, notice these lists of fixes for 2.2.1:

  https://issues.apache.org/jira/browse/YARN/fixforversion/12325667
  https://issues.apache.org/jira/browse/HDFS/fixforversion/12325666

Do these need to have their fix-versions updated?

  Raymie


P.S. While we're on the subject of point releases, let me check my assumptions.

I assumed that, for release x.y.z, fixes deemed to be critical bug
fixes would be put into branch-x.y as a matter of course.  The Maven
release-number in branch-x.y would be x.y.(z+1)-SNAPSHOT, and JIRAs
(to be) committed to branch-x.y would have x.y.(z+1) as one of their
fix-versions.

When enough fixes have accumulated to warrant a release, or when a fix
comes up that is critical enough to warrant an immediate release, then
branch-x.y is branched to branch-x.y.(z+1), and a release is made.

(As Hadoop itself moves from x.y to x.(y+1) and then x.(y+2), the
threshold for what is considered to be a critical bug would
naturally start to rise, as the effort of back-porting goes up.)

Do I have it right?


Build failed in Jenkins: Hadoop-Common-trunk #996

2013-12-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/996/changes

Changes:

[kasha] YARN-1481. Reverting addendum patch

[cdouglas] MAPREDUCE-5196. Add bookkeeping for managing checkpoints of task
state. Contributed by Carlo Curino.

[vinodkv] MAPREDUCE-5694. Fixed MR AppMaster to shutdown the LogManager so as
to avoid losing syslog in some conditions. Contributed by Mohammad Kamrul Islam.

--
[...truncated 61076 lines...]

Re: Possible Length issue

2013-12-29 Thread Dhaivat Pandya
Anyone?


On Sat, Dec 28, 2013 at 1:06 PM, Dhaivat Pandya dhaivatpan...@gmail.com wrote:

 Hi,

 I've been working a lot with the Hadoop NameNode IPC protocol (while
 building a cache layer on top of Hadoop). I've noticed that for request
 packets coming from the default DFS client that do not have a method name,
 the length field is often *completely* off.

 For example, I've been looking at packets with length 1752330339. I
 thought this might have been an issue with my cache layer, so I checked
 with Wireshark, and found packets with such absurd length parameters
 (obviously, the packets themselves weren't actually that long; the length
 field was misrepresented).

 Unfortunately, I haven't had the opportunity to test this issue on other
 machines and setups. (The steps to reproduce: run an ls / with the default
 DFS client, release 1.2.1, and sniff the packets to inspect the length
 parameter.)

 Is this normal behavior, a bug or something I'm missing?

 Thank you,

 Dhaivat



Re: Possible Length issue

2013-12-29 Thread Dhaivat Pandya
Actually, we can treat this as a non-issue; I have found a different
source of error in the system.




[jira] [Created] (HADOOP-10194) CLONE - setnetgrent in native code is not portable

2013-12-29 Thread carltone (JIRA)
carltone created HADOOP-10194:
-

 Summary: CLONE - setnetgrent in native code is not portable
 Key: HADOOP-10194
 URL: https://issues.apache.org/jira/browse/HADOOP-10194
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.22.0, 0.23.0
Reporter: carltone
 Attachments: HADOOP-7147.patch, hadoop-7147.patch

HADOOP-6864 uses the setnetgrent function in a way which is not compatible with 
BSD APIs, where the call returns void rather than int. This prevents the native 
libs from building on OSX, for example.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)