Why doesn't the ipc Server use ArrayBlockingQueue for callQueue?

2012-10-31 Thread 罗李
hi everybody:
I have a little question: why doesn't the ipc Server in Hadoop
use ArrayBlockingQueue for the callQueue instead of LinkedBlockingQueue? Would
it be more efficient?

thanks

luoli
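
For reference, here is a minimal sketch of the two queue choices being compared, assuming a bounded queue of pending Call objects as the question describes. The Call placeholder, capacity, and class name are illustrative rather than Hadoop's actual code; the underlying java.util.concurrent difference is that ArrayBlockingQueue pre-allocates an array and shares a single lock between put and take, while LinkedBlockingQueue allocates a node per element and uses separate put and take locks.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CallQueueSketch {

    // Stand-in for the ipc Server's Call object (hypothetical, for illustration only).
    static class Call { }

    public static void main(String[] args) throws InterruptedException {
        // Illustrative capacity, e.g. handler count * per-handler queue length.
        int maxQueueSize = 100 * 128;

        // What the question says the Server uses: bounded, one linked node
        // allocated per offered element, separate locks for put() and take(),
        // so reader threads and handler threads contend less with each other.
        BlockingQueue<Call> linked = new LinkedBlockingQueue<Call>(maxQueueSize);

        // The proposed alternative: a pre-allocated array, no per-element
        // allocation, but a single lock shared by producers and consumers.
        BlockingQueue<Call> array = new ArrayBlockingQueue<Call>(maxQueueSize);

        // Both implement the same BlockingQueue contract.
        linked.put(new Call());
        array.put(new Call());
        System.out.println(linked.take() != null && array.take() != null);
    }
}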


Build failed in Jenkins: Hadoop-Common-0.23-Build #418

2012-10-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/418/changes

Changes:

[tgraves] MAPREDUCE-1806. CombineFileInputFormat does not work with paths not 
on default FS (Gera Shegalov via tgraves)

[bobby] svn merge -c 1403745 FIXES: HADOOP-8986. Server$Call object is never 
released after it is sent (bobby)

--
[...truncated 18779 lines...]

[INFO] 
[INFO] --- maven-clover2-plugin:3.0.5:clover (clover) @ hadoop-auth ---
[INFO] Using /default-clover-report descriptor.
[INFO] Using Clover report descriptor: /tmp/mvn6115167138835767062resource
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Clover is enabled with initstring 
'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/hadoop-coverage.db'
[WARNING] Clover historical directory 
[https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/history]
 does not exist, skipping Clover historical report generation 
([https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover])
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Loading coverage database from: 
'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/hadoop-coverage.db'
[INFO] Writing HTML report to 
'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover'
[INFO] Done. Processed 4 packages in 857ms (214ms per package).
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Clover is enabled with initstring 
'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/hadoop-coverage.db'
[WARNING] Clover historical directory 
[https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/history]
 does not exist, skipping Clover historical report generation 
([https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/clover.xml])
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Loading coverage database from: 
'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/hadoop-coverage.db'
[INFO] Writing report to 
'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/clover.xml'
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Auth Examples 0.23.5-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-auth-examples 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-auth-examples ---
[INFO] Wrote classpath file 
'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth-examples/target/classes/mrapp-generated-classpath'.
[INFO] 
[INFO] --- maven-clover2-plugin:3.0.5:setup (setup) @ hadoop-auth-examples ---
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Creating new database at 
'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth-examples/target/clover/hadoop-coverage.db'.
[INFO] Processing files at 1.6 source level.
[INFO] Clover all over. Instrumented 3 files (1 package).
[INFO] Elapsed time = 0.015 secs. (200 files/sec, 18,866.668 srclines/sec)
[INFO] No Clover instrumentation done on source files in: 
[https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth-examples/src/test/java]
 as no matching sources files found
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-auth-examples ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 

[jira] [Created] (HADOOP-8996) Error in Hadoop installation

2012-10-31 Thread Shiva (JIRA)
Shiva created HADOOP-8996:
-

 Summary: Error in Hadoop installation
 Key: HADOOP-8996
 URL: https://issues.apache.org/jira/browse/HADOOP-8996
 Project: Hadoop Common
  Issue Type: Bug
 Environment: fedora 15 
Reporter: Shiva


I am trying to install `Hadoop` on a Fedora machine by following the guide [here][1]

1. Installed Java (and verified that it exists with `java -version`)
2. ssh was already installed (since it is Linux)
3. Downloaded the latest version, `hadoop 1.0.4`, from [here][2] 


  [1]: http://hadoop.apache.org/docs/r0.15.2/quickstart.html
  [2]: http://apache.techartifact.com/mirror/hadoop/common/hadoop-1.0.4/

I have followed the process shown in the installation tutorial (link given above) as 
below:

$ mkdir input 
$ cp conf/*.xml input 
$ bin/hadoop jar hadoop-examples-1.0.4.jar grep input output 'dfs[a-z.]+' 

Then I got the following error, which I am unable to understand:

sh-4.2$ bin/hadoop jar hadoop-examples-1.0.4.jar grep input output 
'dfs[a-z.]+'
12/10/31 16:14:35 INFO util.NativeCodeLoader: Loaded the native-hadoop 
library
12/10/31 16:14:35 WARN snappy.LoadSnappy: Snappy native library not loaded
12/10/31 16:14:35 INFO mapred.FileInputFormat: Total input paths to process 
: 8
12/10/31 16:14:35 INFO mapred.JobClient: Cleaning up the staging area 
file:/tmp/hadoop-thomas/mapred/staging/shivakrishnab-857393825/.staging/job_local_0001
12/10/31 16:14:35 ERROR security.UserGroupInformation: 
PriviledgedActionException as:thomas cause:java.io.IOException: Not a file: 
file:/home/local/thomas/Hadoop/hadoop-1.0.4/input/conf
java.io.IOException: Not a file: 
file:/home/local/thomas/Hadoop/hadoop-1.0.4/input/conf
at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:989)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:981)
at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1261)
at org.apache.hadoop.examples.Grep.run(Grep.java:69)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.Grep.main(Grep.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)


Can anyone let me know what's wrong with my machine or code, and what to do to avoid 
this error?


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


RE: libhdfs on windows

2012-10-31 Thread Peter Marron
Hi Colin,

I have a requirement to be able to run all the Hadoop functionality
that I build from a variety of platforms. This was the original
motivation for wanting to use libhdfs. I followed your
suggestion and looked into using webhdfs and it's looking
promising. Thanks for that. However I also need to be able
to launch Map/Reduce jobs from any platform.
In particular from Windows.  I looked into this by hacking
the bin/hadoop  script to extract the required class path
and various arguments so that I could launch a Map/Reduce job
just by invoking java with the correct arguments.
However I ran into HADOOP-7682.
I can see that there is a workaround here
https://github.com/congainc/patch-hadoop_7682-1.0.x-win
but it suggests that this is not really appropriate for
deployment. I suspect that I can get it to work reliably
by using cygwin and making loads of modifications
but that all seems rather a large effort, error-prone and
difficult to maintain.

Given that I plan to have a relatively small repertoire of Map/Reduce
jobs that I need to launch, I'm tempted to have all the jars pre-packed
on the Name-Node and have the ability to run them there. Then
have a daemon running so that I can use any appropriate ad-hoc RPC
mechanism from Windows to launch them.

Am I missing something? Is there a way to launch Map/Reduce
jobs in a platform neutral way, which runs out of the box, on Windows?

Again, any suggestions welcome.

Peter Marron
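
For context, the platform-neutral piece being approximated by hacking the bin/hadoop script is roughly what a fully populated JobConf plus JobClient.runJob() gives you. Below is a minimal sketch, assuming Hadoop 1.x on the classpath, with hypothetical host names, paths, and job name; note that submitting from Windows would still hit HADOOP-7682 as described above.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class RemoteSubmitSketch {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        // Supply the cluster addresses directly instead of relying on the
        // bin/hadoop script to inject *-site.xml; values are hypothetical.
        conf.set("fs.default.name", "hdfs://namenode.example.com:8020");
        conf.set("mapred.job.tracker", "jobtracker.example.com:8021");
        // The job jar must be readable from the submitting machine.
        conf.setJar("/path/to/my-job.jar");
        conf.setJobName("submit-without-bin-hadoop");

        // Identity map/reduce over text input, just to have a runnable job;
        // key/value classes match TextInputFormat's LongWritable/Text records.
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path("/user/peter/input"));
        FileOutputFormat.setOutputPath(conf, new Path("/user/peter/output"));

        // Ships the splits, jar and configuration to the JobTracker over
        // Hadoop RPC and blocks until the job completes.
        JobClient.runJob(conf);
    }
}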

> -----Original Message-----
> From: Peter Marron [mailto:Peter.Marron@trilliumsoftware.com]
> Sent: 26 October 2012 00:53
> To: common-dev@hadoop.apache.org
> Subject: RE: libhdfs on windows
>
> Hi Colin,
>
> OK, I didn't know there was a hdfs-dev. I'm happy to ask there.
> (However there's a lot of mail on dev@hadoop and user@hadoop as well
> as user@hive and it's a bit of a commitment to track them all.) As for
> webhdfs, I did think about that, and in some ways it's a beautiful
> solution as it gives me a platform- and language-neutral access
> mechanism. I was just a little worried about the HTTP overhead if
> I am reading a single record at a time. Also I will need some way to
> launch my Map/Reduce jobs as well. So I'll probably end up using the
> C++/JNI/Java route to do that anyway. Unless there's a better way?
> Is there a web Map/Reduce interface?
>
> Many thanks,
>
> Z
>
>> -----Original Message-----
>> From: rarecac...@gmail.com [mailto:rarecac...@gmail.com] On Behalf Of Colin McCabe
>> Sent: 25 October 2012 18:24
>> To: common-dev@hadoop.apache.org
>> Subject: Re: libhdfs on windows
>>
>> Hi Peter,
>>
>> This might be a good question for hdfs-dev?
>>
>> As Harsh pointed out below, HDFS-573 was never committed.  I don't
>> even see a patch attached, although there is some discussion.
>>
>> In the mean time, might I suggest using the webhdfs interface on
>> Windows? webhdfs was intended as a stable REST interface that can be
>> accessed from any platform.
>>
>> cheers,
>> Colin
>>
>> On Thu, Oct 25, 2012 at 7:19 AM, Peter Marron
>> peter.mar...@trilliumsoftware.com wrote:
>>> Hi,
>>>
>>> I've been looking at using libhdfs and I would like to use it on windows.
>>> I have found HDFS-573 and the information on this page:
>>> http://issues.apache.org/jira/browse/HDFS-573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
>>> which suggests that quite a lot of work was done on this way back in 2009.
>>> So is there some source from this effort retained somewhere? If so, where?
>>> Or do I have to start from scratch?
>>> Apologies if this has already been asked recently.
>>>
>>> Any help appreciated.
>>>
>>> Peter Marron




[jira] [Created] (HADOOP-8997) Upgrade of the .deb Package removes Hadoop Users (hdfs and mapred) and the hadoop group

2012-10-31 Thread Ingo Rauschenberg (JIRA)
Ingo Rauschenberg created HADOOP-8997:
-

 Summary: Upgrade of the .deb Package removes Hadoop Users (hdfs 
and mapred) and the hadoop group
 Key: HADOOP-8997
 URL: https://issues.apache.org/jira/browse/HADOOP-8997
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.4, 1.0.3
 Environment: Debian Squeeze 64 Bit
Reporter: Ingo Rauschenberg


During a package upgrade, the postrm script from the old package is the last 
script that is called.
Because it runs after the preinst script of the new package, the Hadoop users 
and the hadoop group are deleted.

This happens because the script does not check the argument that is passed to it.

A solution may be to modify the script as follows:
#! /bin/sh

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Abort if any command returns an error value
set -e

# This script is called twice during the removal of the package; once
# after the removal of the package's files from the system, and as
# the final step in the removal of this package, after the package's
# conffiles have been removed.

case "$1" in
  remove)
# This package has been removed, but its configuration has not yet
# been purged.
/usr/sbin/userdel hdfs 2> /dev/null > /dev/null
/usr/sbin/userdel mapred 2> /dev/null > /dev/null
/usr/sbin/groupdel hadoop 2> /dev/null > /dev/null
;;
  purge)
# This package has previously been removed and is now having
# its configuration purged from the system.
/usr/sbin/userdel hdfs 2> /dev/null > /dev/null
/usr/sbin/userdel mapred 2> /dev/null > /dev/null
/usr/sbin/groupdel hadoop 2> /dev/null > /dev/null
;;
  disappear)
if test "$2" != "overwriter"; then
  echo "$0: undocumented call to \`postrm $*'" 1>&2
  exit 1
fi
# This package has been completely overwritten by package $3
# (version $4).  All our files are already gone from the system.
# This is a special case: neither prerm remove nor postrm remove
# have been called, because dpkg didn't know that this package would
# disappear until this stage.
:
;;
  upgrade)
# About to upgrade FROM THIS VERSION to version $2 of this package.
# prerm upgrade has been called for this version, and preinst
# upgrade has been called for the new version.  Last chance to
# clean up.
:
;;
  failed-upgrade)
# About to upgrade from version $2 of this package TO THIS VERSION.
# prerm upgrade has been called for the old version, and preinst
# upgrade has been called for this version.  This is only used if
# the previous version's postrm upgrade couldn't handle it and
# returned non-zero. (Fix old postrm bugs here.)
:
;;
  abort-install)
# Back out of an attempt to install this package.  Undo the effects of
# preinst install.  There are two sub-cases.
/usr/sbin/userdel hdfs 2> /dev/null > /dev/null
/usr/sbin/userdel mapred 2> /dev/null > /dev/null
/usr/sbin/groupdel hadoop 2> /dev/null > /dev/null
if test "${2+set}" = set; then
  # When the install was attempted, version $2's configuration
  # files were still on the system.  Undo the effects of preinst
  # install $2.
  :
else
  # We were being installed from scratch.  Undo the effects of
  # preinst install.
  :
fi ;;
  abort-upgrade)
# Back out of an attempt to upgrade this package from version $2
# TO THIS VERSION.  Undo the effects of preinst upgrade $2.
:
;;
  *) echo "$0: didn't understand being called with \`$1'" 1>&2
 exit 1;;
esac

exit 0



[jira] [Resolved] (HADOOP-8996) Error in Hadoop installation

2012-10-31 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans resolved HADOOP-8996.
-

Resolution: Invalid

 Error in Hadoop installation
 

 Key: HADOOP-8996
 URL: https://issues.apache.org/jira/browse/HADOOP-8996
 Project: Hadoop Common
  Issue Type: Bug
 Environment: fedora 15 
Reporter: Shiva

 I am trying to install `Hadoop` on a Fedora machine by following the guide at
 http://hadoop.apache.org/docs/r0.15.2/quickstart.html
 1. Installed Java (and verified that it exists with `java -version`)
 2. ssh was already installed (since it is Linux)
 3. Downloaded the latest version, `hadoop 1.0.4`, from
 http://apache.techartifact.com/mirror/hadoop/common/hadoop-1.0.4/
 I have followed the process shown in the installation tutorial (link given above) 
 as below:
 $ mkdir input 
 $ cp conf/*.xml input 
 $ bin/hadoop jar hadoop-examples-1.0.4.jar grep input output 'dfs[a-z.]+' 
 Then I got the following error, which I am unable to understand:
 sh-4.2$ bin/hadoop jar hadoop-examples-1.0.4.jar grep input output 
 'dfs[a-z.]+'
 12/10/31 16:14:35 INFO util.NativeCodeLoader: Loaded the native-hadoop 
 library
 12/10/31 16:14:35 WARN snappy.LoadSnappy: Snappy native library not loaded
 12/10/31 16:14:35 INFO mapred.FileInputFormat: Total input paths to 
 process : 8
 12/10/31 16:14:35 INFO mapred.JobClient: Cleaning up the staging area 
 file:/tmp/hadoop-thomas/mapred/staging/thomas-857393825/.staging/job_local_0001
 12/10/31 16:14:35 ERROR security.UserGroupInformation: 
 PriviledgedActionException as:thomas cause:java.io.IOException: Not a file: 
 file:/home/local/thomas/Hadoop/hadoop-1.0.4/input/conf
 java.io.IOException: Not a file: 
 file:/home/local/thomas/Hadoop/hadoop-1.0.4/input/conf
   at 
 org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
   at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:989)
   at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:981)
   at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
   at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:416)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
   at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
   at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824)
   at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1261)
   at org.apache.hadoop.examples.Grep.run(Grep.java:69)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
   at org.apache.hadoop.examples.Grep.main(Grep.java:93)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at 
 org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
   at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
   at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)



[jira] [Created] (HADOOP-8999) SASL negotiation is flawed

2012-10-31 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-8999:
---

 Summary: SASL negotiation is flawed
 Key: HADOOP-8999
 URL: https://issues.apache.org/jira/browse/HADOOP-8999
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Daryn Sharp
Assignee: Daryn Sharp


The RPC protocol used for SASL negotiation is flawed.  The server's RPC 
response contains the next SASL challenge token, but a SASL server can return 
either null ("I'm done") or an N-byte challenge.  The server currently will not send 
an RPC success response to the client if the SASL server returns null, which 
causes the client to hang until it times out.
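
For illustration, here is a minimal sketch of the server-side step being described, using the standard javax.security.sasl API: evaluateResponse() may legitimately return null when negotiation has just completed, so the server still has to send a success reply instead of waiting for another token. The wire framing below is hypothetical, not Hadoop's actual RPC encoding.
{code}
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

public class SaslNegotiationSketch {
  /**
   * Handles one SASL token from the client and returns the bytes the server
   * must send back.  A null return from evaluateResponse() means negotiation
   * finished with nothing more to send; the RPC layer still has to emit a
   * success response, otherwise the client blocks until it times out.
   */
  static byte[] processToken(SaslServer saslServer, byte[] clientToken)
      throws SaslException {
    byte[] challenge = saslServer.evaluateResponse(clientToken);
    if (challenge == null) {
      // Negotiation complete: reply with an empty (but present) token so the
      // client sees a success response.  Zero-length framing is illustrative.
      return new byte[0];
    }
    return challenge;
  }
}
{code}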



Re: Why doesn't the ipc Server use ArrayBlockingQueue for callQueue?

2012-10-31 Thread Todd Lipcon
Hi Luoli

Why would it be more efficient? When suggesting an improvement, it would be
good to back it up with your reasoning.

-Todd

On Wed, Oct 31, 2012 at 1:19 AM, 罗李 luoli...@gmail.com wrote:

 hi everybody:
 I have a little question: why doesn't the ipc Server in Hadoop
 use ArrayBlockingQueue for the callQueue instead of LinkedBlockingQueue? Would
 it be more efficient?

 thanks

 luoli




-- 
Todd Lipcon
Software Engineer, Cloudera


[jira] [Created] (HADOOP-9000) remove Trash.makeTrashRelativePath in branch-1-win

2012-10-31 Thread Brandon Li (JIRA)
Brandon Li created HADOOP-9000:
--

 Summary: remove Trash.makeTrashRelativePath in branch-1-win
 Key: HADOOP-9000
 URL: https://issues.apache.org/jira/browse/HADOOP-9000
 Project: Hadoop Common
  Issue Type: Improvement
  Components: trash
Affects Versions: 1-win
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Minor


This method does nothing except call Path.mergePaths(). If the method exists only 
to show the purpose of the operation, a comment can do that just as well, since it's 
used in only a couple of places.



[jira] [Resolved] (HADOOP-9000) remove Trash.makeTrashRelativePath in branch-1-win

2012-10-31 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HADOOP-9000.


Resolution: Won't Fix

 remove Trash.makeTrashRelativePath in branch-1-win
 --

 Key: HADOOP-9000
 URL: https://issues.apache.org/jira/browse/HADOOP-9000
 Project: Hadoop Common
  Issue Type: Improvement
  Components: trash
Affects Versions: 1-win
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Minor
 Attachments: HADOOP-9000.branch-1-win.patch


 This method does nothing except call Path.mergePaths(). If the method exists only 
 to show the purpose of the operation, a comment can do that just as well, since it's 
 used in only a couple of places.



[jira] [Created] (HADOOP-9001) libhadoop.so links against wrong OpenJDK libjvm.so

2012-10-31 Thread Andy Isaacson (JIRA)
Andy Isaacson created HADOOP-9001:
-

 Summary: libhadoop.so links against wrong OpenJDK libjvm.so
 Key: HADOOP-9001
 URL: https://issues.apache.org/jira/browse/HADOOP-9001
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andy Isaacson
Priority: Minor


After building against OpenJDK 6b24-1.11.4-3 (Debian amd64) using
bq. {{mvn -Pnative,dist clean package -Dmaven.javadoc.skip=true -DskipTests 
-Dtar}}
the resulting binaries {{libhadoop.so}} and {{libhdfs.so}} are linked to the 
wrong {{libjvm.so}}:
{code}
% LD_LIBRARY_PATH=/usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64/server ldd 
hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
linux-vdso.so.1 =>  (0x7fff8c7ff000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f31df30e000)
libjvm.so.0 => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f31def86000)
/lib64/ld-linux-x86-64.so.2 (0x7f31df73d000)
{code}
Inspecting the build output it appears that {{JNIFlags.cmake}} decided, 
mysteriously, to link against 
{{/usr/lib/jvm/default-java/jre/lib/amd64/jamvm/libjvm.so}}, based on:
{code}
 [exec] JAVA_HOME=, 
JAVA_JVM_LIBRARY=/usr/lib/jvm/default-java/jre/lib/amd64/jamvm/libjvm.so
 [exec] JAVA_INCLUDE_PATH=/usr/lib/jvm/default-java/include, 
JAVA_INCLUDE_PATH2=/usr/lib/jvm/default-java/include/linux
 [exec] Located all JNI components successfully.
{code}

jamvm is not mentioned anywhere in my environment or in any symlinks in 
/usr, so apparently cmake iterated over the directories in 
{{/usr/lib/jvm/default-java/jre/lib/amd64}} to find it.  The following 
{{libjvm.so}} files are present on this machine:
{code}
-rw-r--r-- 1 root root  1050190 Sep  2 13:38 
/usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64/cacao/libjvm.so
-rw-r--r-- 1 root root  1554628 Sep  2 11:21 
/usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64/jamvm/libjvm.so
-rw-r--r-- 1 root root 12193850 Sep  2 13:38 
/usr/lib/jvm/java-6-openjdk-amd64/jre/lib/amd64/server/libjvm.so
{code}

Note the difference between {{libjvm.so}} and {{libjvm.so.0}}; the latter seems 
to come from the {{DT_SONAME}} in {{jamvm/libjvm.so}}, but that library seems 
to just be broken since there's no {{libjvm.so.0}} symlink anywhere on the 
filesystem.  I suspect *that* is a bug in OpenJDK but we should just avoid the 
issue by finding the right value for {{JAVA_JVM_LIBRARY}}.
