Re: Hadoop native builds fail on ARM due to -m32

2011-05-22 Thread Eli Collins
Hey Trevor,

Thanks for all the info.  I took a quick look at HADOOP-7276 and
HDFS-1920; I haven't had a chance to do a full review yet, but they
don't look like they'll be a burden, and if they get Hadoop running on
ARM, that's great!

Thanks,
Eli

On Fri, May 20, 2011 at 4:27 PM, Trevor Robinson  wrote:
> Hi Eli,
>
> On Thu, May 19, 2011 at 1:39 PM, Eli Collins  wrote:
>> Thanks for contributing.  Supporting ARM on Hadoop will require a
>> number of different changes, right? E.g., given that Hadoop currently
>> depends on some Sun-specific classes and requires a Sun-compatible
>> JVM, you'll have to work around that dependency somehow; there's no
>> Sun JVM for ARM, right?
>
> Actually, there is a Sun JVM for ARM, and it works quite well:
>
> http://www.oracle.com/technetwork/java/embedded/downloads/index.html
>
> Currently, it's just a JRE, so you have to use another JDK for javac,
> etc., but I'm optimistic that we'll see a Sun Java SE JDK for ARM
> servers one of these days, given all the ARM server activity from
> Calxeda [http://www.theregister.co.uk/2011/03/14/calxeda_arm_server/],
> Marvell, and nVidia
> [http://www.channelregister.co.uk/2011/01/05/nvidia_arm_pc_server_chip/].
>
> With the patches I submitted, Hadoop builds completely, and nearly all
> of the Common and HDFS unit tests pass with OpenJDK on ARM. (Some of
> the Map/Reduce unit tests crash due to a bug in the OpenJDK build
> I'm using.) I need to re-run the unit tests with the Sun
> JRE and see if they pass; other tests/benchmarks have run much faster
> and more reliably with the Sun JRE, so I anticipate better results.
> I've run tests like TestDFSIO with the Sun JRE and have had no
> problems.
>
>> If there's a handful of additional changes then let's make an umbrella
>> jira for Hadoop ARM support and make the issues you've already filed
>> sub-tasks. You can ping me off-line on how to do that if you want.
>> Supporting non-x86 processors and non-gcc compilers is an additional
>> maintenance burden on the project, so it would be helpful to have an
>> end-game figured out so these patches don't bitrot in the meantime.
>
> I really don't anticipate any additional changes at this point. No
> Java or C++ code changes have been necessary; it's simply a matter of
> removing -m32 from CFLAGS/LDFLAGS and adding ARM to the list of
> processors in apsupport.m4, which contains lots of other unsupported
> processors anyway (see the sketch after this message). And just to be
> clear, pretty much everyone uses gcc for compilation on ARM, so
> supporting another compiler is unnecessary for this.
>
> I certainly don't want to increase maintenance burden at this point,
> especially given that data center-grade ARM servers are still in the
> prototype stage. OTOH, these changes seem pretty trivial to me, and
> allow other developers (particularly those evaluating ARM and those
> involved in the Ubuntu ARM Server 11.10 release this fall:
> https://blueprints.launchpad.net/ubuntu/+spec/server-o-arm-server) to
> get Hadoop up and running without having to patch the build.
>
> I'll follow up offline though, so I can better understand any concerns
> you may still have.
>
> Thanks,
> Trevor
>
>> On Tue, May 10, 2011 at 5:13 PM, Trevor Robinson  wrote:
>>> Is the native build failing on ARM (where gcc doesn't support -m32) a
>>> known issue, and is there a workaround or fix pending?
>>>
>>> $ ant -Dcompile.native=true
>>> ...
>>>      [exec] make  all-am
>>>      [exec] make[1]: Entering directory
>>> `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
>>>      [exec] /bin/bash ./libtool  --tag=CC   --mode=compile gcc
>>> -DHAVE_CONFIG_H -I. -I/home/trobinson/dev/hadoop-common/src/native
>>> -I/usr/lib/jvm/java-6-openjdk/include
>>> -I/usr/lib/jvm/java-6-openjdk/include/linux
>>> -I/home/trobinson/dev/hadoop-common/src/native/src
>>> -Isrc/org/apache/hadoop/io/compress/zlib
>>> -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
>>> -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
>>> .deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f
>>> 'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo
>>> '/home/trobinson/dev/hadoop-common/src/native/'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
>>>      [exec] libtool: compile:  gcc -DHAVE_CONFIG_H -I.
>>> -I/home/trobinson/dev/hadoop-common/src/native
>>> -I/usr/lib/jvm/java-6-openjdk/include
>>> -I/usr/lib/jvm/java-6-openjdk/include/linux
>>> -I/home/trobinson/dev/hadoop-common/src/native/src
>>> -Isrc/org/apache/hadoop/io/compress/zlib
>>> -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
>>> -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
>>> .deps/ZlibCompressor.Tpo -c
>>> /home/trobinson/dev/hadoop-common/src/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
>>>  -fPIC -DPIC -o .libs/ZlibCompressor.o
>>>      [exec] make[1]: Leaving directory
>>> `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
>>>      [exec] 
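
The change Trevor describes amounts to two small build edits. A minimal
sketch for context; the actual patch is in HADOOP-7276, and the flag
values and case labels below are illustrative assumptions, not the
committed diff:

    # 1. Drop the hard-coded -m32 from the native build's compiler and
    #    linker flags; gcc on ARM has no -m32, and without the flag the
    #    compiler simply targets the host's default ABI.
    #    before: CFLAGS="-g -Wall -fPIC -O2 -m32"  LDFLAGS="-m32"
    #    after:  CFLAGS="-g -Wall -fPIC -O2"       LDFLAGS=""

    # 2. apsupport.m4: add ARM to the host-CPU switch so configure
    #    accepts the platform instead of bailing out.
    case "$host_cpu" in
      arm*)
        CPU="arm" ;;
      # ... existing processor cases unchanged ...
    esac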

Hadoop-Common-trunk-Commit - Build # 614 - Failure

2011-05-22 Thread Apache Jenkins Server
See https://builds.apache.org/hudson/job/Hadoop-Common-trunk-Commit/614/

###################################################################
################## LAST 60 LINES OF THE CONSOLE ###################
###################################################################
[...truncated 2097 lines...]
 [exec] libtool: install: chmod 644 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/system/native/Linux-i386-32/lib/libhadoop.a
 [exec] libtool: install: ranlib 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/system/native/Linux-i386-32/lib/libhadoop.a
 [exec] libtool: install: warning: remember to run `libtool --finish 
/usr/local/lib'

compile-core:

jar:
  [tar] Building tar: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/system/classes/bin.tgz
  [jar] Building jar: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/system/hadoop-common-instrumented-0.23.0-SNAPSHOT.jar
  [jar] Building jar: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/system/hadoop-common-instrumented-0.23.0-SNAPSHOT-sources.jar
  [jar] Updating jar: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/system/hadoop-common-instrumented-0.23.0-SNAPSHOT-sources.jar

set-version:
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/ivy
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/ivy
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/ivy

clean-sign:

sign:

signanddeploy:

simpledeploy:
[artifact:install-provider] Installing provider: 
org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to 
https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from 
apache.snapshots.https
[artifact:deploy] An error has occurred while processing the Maven artifact 
tasks.
[artifact:deploy]  Diagnosis:
[artifact:deploy] 
[artifact:deploy] Error deploying artifact 
'org.apache.hadoop:hadoop-common:jar': Error retrieving previous build number 
for artifact 'org.apache.hadoop:hadoop-common:jar': repository metadata for: 
'snapshot org.apache.hadoop:hadoop-common:0.23.0-SNAPSHOT' could not be 
retrieved from repository: apache.snapshots.https due to an error: Error 
transferring file
[artifact:deploy] Connection timed out

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build.xml:1354:
 Error deploying artifact 'org.apache.hadoop:hadoop-common:jar': Error 
retrieving previous build number for artifact 
'org.apache.hadoop:hadoop-common:jar': repository metadata for: 'snapshot 
org.apache.hadoop:hadoop-common:0.23.0-SNAPSHOT' could not be retrieved from 
repository: apache.snapshots.https due to an error: Error transferring file

Total time: 4 minutes 15 seconds


==
==
STORE: saving artifacts
==
==


mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording fingerprints
Archiving artifacts
Recording test results
Publishing Javadoc
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################
###################### FAILED TESTS (if any) ######################
###################################################################
No tests ran.


Re: Hudson pre-commit job broken

2011-05-22 Thread Todd Lipcon
I think this was a config change on the Hudson master. I'll email
builds@ and see what we can do about it.

On Sun, May 22, 2011 at 5:15 PM, Todd Lipcon  wrote:
> Hudson seems to keep on hosing itself in this way. Anyone have any ideas?
>
> On Thu, May 19, 2011 at 4:58 PM, Todd Lipcon  wrote:
>> I got it fixed across all the machines by removing those files and
>> then svn upping.
>>
>> For some reason it looks like one particular checkout was done with
>> internal svn version "10" and then the svn binary went back to version
>> 9.
>>
>> -Todd
>>
>> On Thu, May 19, 2011 at 4:55 PM, Nigel Daley  wrote:
>>> Maybe it's time to update the Jenkins slave.jar.  I'll do that.
>>>
>>> Nige
>>>
>>> On May 19, 2011, at 4:43 PM, Todd Lipcon wrote:
>>>
 Strange... looks like the same issue is happening on the other build
 boxes too - I'd fixed h9 but h6 also has the issue.

 On Thu, May 19, 2011 at 4:40 PM, Todd Lipcon  wrote:
> Must have been some svn bug. I rm -Rfed the http/lib directory in
> question, ran svn cleanup, svn up, and it seems OK now.
>
> -Todd
>
> On Thu, May 19, 2011 at 4:14 PM, Aaron T. Myers  wrote:
>> See this page:
>> https://hudson.apache.org/hudson/job/PreCommit-Hadoop-Build/480/console
>>
>> And note the following error:
>>
>>
>>     [exec] svn: This client is too old to work with working copy
>> 'src/test/core/org/apache/hadoop/http/lib'.  You need
>>     [exec] to get a newer Subversion client, or to downgrade this 
>> working copy.
>>     [exec] See 
>> http://subversion.tigris.org/faq.html#working-copy-format-change
>>     [exec] for details.
>>
>>
>> --
>> Aaron T. Myers
>> Software Engineer, Cloudera
>>
>
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>



 --
 Todd Lipcon
 Software Engineer, Cloudera
>>>
>>>
>>
>>
>>
>> --
>> Todd Lipcon
>> Software Engineer, Cloudera
>>
>
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>



-- 
Todd Lipcon
Software Engineer, Cloudera
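
The fix Todd describes boils down to deleting the working-copy directory
whose metadata format is newer than the installed svn client understands,
then letting svn repair and re-fetch it. A minimal sketch, assuming the
path from the error message and a shell at the checkout root on each
affected build slave:

    # remove the directory with the too-new working-copy format
    rm -rf src/test/core/org/apache/hadoop/http/lib
    # clear any locks or partial state, then restore the directory
    svn cleanup
    svn up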


[jira] [Created] (HADOOP-7319) Coarse-grained dynamic configuration changes

2011-05-22 Thread Ted Yu (JIRA)
Coarse-grained dynamic configuration changes
--------------------------------------------

                 Key: HADOOP-7319
                 URL: https://issues.apache.org/jira/browse/HADOOP-7319
             Project: Hadoop Common
          Issue Type: Improvement
          Components: conf
            Reporter: Ted Yu


HADOOP-7001 introduced a mechanism for performing dynamic configuration
changes, where reconfigureProperty()/reconfigurePropertyImpl() notifies of
only a single property change at a time.

Normally, components which use ReconfigurableBase involve several related
properties whose updates should be applied atomically.

This JIRA provides coarse-grained dynamic configuration changes, with the
following benefits:
1. consistency when updating related properties dynamically
2. reduced lock contention when multiple properties are changed in close
succession

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
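
A minimal sketch of the kind of batch API the issue suggests, assuming
the ReconfigurableBase and ReconfigurationException classes introduced by
HADOOP-7001; the reconfigureProperties() method below is a hypothetical
addition, not existing Hadoop API:

    // Hypothetical method on org.apache.hadoop.conf.ReconfigurableBase
    // (assumes java.util.Map is imported). Validate the whole batch
    // first, then apply every change while holding the object monitor,
    // so related properties are updated atomically.
    public synchronized void reconfigureProperties(Map<String, String> props)
        throws ReconfigurationException {
      // reject the entire batch if any property is not reconfigurable
      for (Map.Entry<String, String> e : props.entrySet()) {
        if (!isPropertyReconfigurable(e.getKey())) {
          throw new ReconfigurationException(
              e.getKey(), e.getValue(), getConf().get(e.getKey()));
        }
      }
      // one lock acquisition for the whole batch: callers synchronizing
      // on this object never observe a partially-updated property set
      for (Map.Entry<String, String> e : props.entrySet()) {
        reconfigurePropertyImpl(e.getKey(), e.getValue());
        getConf().set(e.getKey(), e.getValue());
      }
    }

Compared with calling reconfigureProperty() once per key, this trades
per-property notification for a single consistent transition and one lock
acquisition, which matches the two benefits listed above.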