[jira] [Created] (HADOOP-7396) The information returned by the wrong usage of the command "hadoop job -events <job-id> <from-event-#> <#-of-events>" is not appropriate

2011-06-15 Thread Yan Jinshuang (JIRA)
The information returned by the wrong usage of the command "hadoop job -events 
  <job-id> <from-event-#> <#-of-events>" is not appropriate


 Key: HADOOP-7396
 URL: https://issues.apache.org/jira/browse/HADOOP-7396
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.23.0
Reporter: Yan Jinshuang
Priority: Minor
 Fix For: 0.23.0


With wrong values for from-event-# and #-of-events, such as a range from 1000 
to 1 where the start number comes after the end number, the command always 
returns 0. It is expected to show detailed information instead, like "the start 
number should be less than the end number for the range of events".
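
As a minimal sketch of the kind of check being requested (illustrative only, 
not the actual JobClient CLI code, and assuming the two arguments are treated 
as the start and end of the range):

// Illustrative sketch only: fail loudly on a bad event range instead of
// silently returning zero events.
public final class EventRangeCheck {
  static void validate(int start, int end) {
    if (start > end) {
      throw new IllegalArgumentException(
          "the start number (" + start + ") should be less than the end number ("
          + end + ") for the range of events");
    }
  }

  public static void main(String[] args) {
    validate(1000, 1); // the example from this report: throws with a clear message
  }
}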

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7397) Allow configurable timeouts when connecting to HDFS via Java FileSystem API

2011-06-15 Thread Scott Fines (JIRA)
Allow configurable timeouts when connecting to HDFS via Java FileSystem API
---

 Key: HADOOP-7397
 URL: https://issues.apache.org/jira/browse/HADOOP-7397
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.20.2
 Environment: Any
Reporter: Scott Fines
Priority: Minor


If the NameNode is not available (for example, during a network partition 
separating the client from the NameNode) and an attempt is made to connect, 
the FileSystem API will *eventually* time out and throw an error. However, 
that timeout is currently hardcoded to 20 seconds per connection attempt, with 
45 retries, for a total wait of 15 minutes before failure. While in many 
circumstances this is fine, there are also many circumstances (such as booting 
a service) where both the connection timeout and the number of retries should 
be significantly lower, so as not to harm the availability of other services.

Investigating Client.java, I see that there are two fields in Connection: 
maxRetries and rpcTimeout. I propose either re-using those fields when 
initiating the connection as well, or using the already existing 
dfs.socket.timeout parameter to set the connection timeout at initialization, 
potentially adding a new parameter such as dfs.connection.retries with a 
default of 45 to preserve the current behavior.
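
As a rough sketch of the second alternative (not actual Client.java code; 
dfs.connection.retries is the new key proposed here, while dfs.socket.timeout 
already exists):

import org.apache.hadoop.conf.Configuration;

// Sketch of the proposal: read the connect timeout and retry count from
// configuration instead of hard-coding them in Client.java.
public class ConnectSettings {
  static final int DEFAULT_CONNECT_TIMEOUT_MS = 20 * 1000; // current hard-coded value
  static final int DEFAULT_MAX_RETRIES = 45;               // current hard-coded value

  static int connectTimeout(Configuration conf) {
    return conf.getInt("dfs.socket.timeout", DEFAULT_CONNECT_TIMEOUT_MS);
  }

  static int maxRetries(Configuration conf) {
    return conf.getInt("dfs.connection.retries", DEFAULT_MAX_RETRIES);
  }
}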

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-3436) Useless synchronized in JobTracker

2011-06-15 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-3436.
-

Resolution: Not A Problem

Does not appear to be a problem w.r.t. trunk. There is no such variable held; a 
collection is used instead, and it requires holding the JT lock and is 
synchronized (per the comments).

Resolving as "Not a problem" (anymore). Stale issue.

> Useless synchronized in JobTracker
> --
>
> Key: HADOOP-3436
> URL: https://issues.apache.org/jira/browse/HADOOP-3436
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brice Arnould
>Assignee: Brice Arnould
>Priority: Trivial
>
> In the original code, numTaskTrackers is fetched in a synchronized way, which 
> is useless because it might change anyway while the algorithm is running.
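
As an illustration of the point (a sketch, not JobTracker code): a 
synchronized read of a counter buys nothing when the value can change as soon 
as the lock is released.

// Sketch: the guarded read below does not make the caller's algorithm any
// safer, because the count may change immediately after the lock is released.
public class SnapshotExample {
  private final Object lock = new Object();
  private int numTaskTrackers; // updated elsewhere under the same lock

  int snapshotCount() {
    int snapshot;
    synchronized (lock) {
      snapshot = numTaskTrackers; // guarded read...
    }
    // ...but numTaskTrackers may change right here, so the caller runs on a
    // stale value either way.
    return snapshot;
  }
}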

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-7106) Re-organize hadoop subversion layout

2011-06-15 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HADOOP-7106.
-

  Resolution: Fixed
Hadoop Flags: [Reviewed]

I think all the pieces of this are complete now, so marking resolved. Thanks to 
the many people who contributed: Nigel, Owen, Doug, Ian, Jukka, etc.

> Re-organize hadoop subversion layout
> 
>
> Key: HADOOP-7106
> URL: https://issues.apache.org/jira/browse/HADOOP-7106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Nigel Daley
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: HADOOP-7106-auth.patch, HADOOP-7106-auth.patch, 
> HADOOP-7106-auth.patch, HADOOP-7106-git.sh, HADOOP-7106-git.sh, 
> HADOOP-7106.sh, HADOOP-7106.sh, HADOOP-7106.sh, HADOOP-7106.sh, 
> HADOOP-7106.sh, HADOOP-7106.sh, HADOOP-7106.sh, HADOOP-7106.sh, 
> gitk-example.png, mailer-conf.diff
>
>
> As discussed on general@ at http://tinyurl.com/4q6lhxm

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Review Request: HDFS-1788 FsShell ls: Show symlinks properties

2011-06-15 Thread Bochun Bai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/908/
---

Review request for hadoop-common.


Summary
---

HDFS-1788 FsShell ls: Show symlinks properties
I need some suggestions about:
  1. Should PathData also hold a FileSystem?
  2. If the symlink target does not exist or permission is denied, should ls -L 
show the entry as broken (e.g. blinking text)? There seems to be no way to get 
a FileStatus of the link target without accessing it.
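
For question 2, here is a minimal sketch of one possible fallback, assuming 
FileContext's getFileStatus/getFileLinkStatus pair (this is not the patch 
itself): try to stat the target and, when that fails, fall back to the link's 
own status so ls can flag the entry as broken.

import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;

// Sketch only: resolve a symlink target for "ls -L", falling back to the
// link's own status when the target cannot be stat'ed.
public class LinkTargetStatus {
  static FileStatus statusFor(FileContext fc, Path p, boolean followLink)
      throws IOException {
    if (!followLink) {
      return fc.getFileLinkStatus(p); // status of the link itself
    }
    try {
      return fc.getFileStatus(p);     // follows the symlink to its target
    } catch (FileNotFoundException e) {
      // dangling target: no way to get the target's FileStatus
      return fc.getFileLinkStatus(p);
    } catch (AccessControlException e) {
      // permission denied on the target: same fallback
      return fc.getFileLinkStatus(p);
    }
  }
}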


Diffs
-

  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/FileContext.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/FsShell.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/FsShellPermissions.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/LocalFileSystem.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/Command.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/CommandWithDestination.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/CopyCommands.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/Count.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/Delete.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/Display.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/FsUsage.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/Ls.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/Mkdir.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/MoveCommands.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/PathData.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/SetReplication.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/Stat.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/Tail.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/Test.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/java/org/apache/hadoop/fs/shell/Touchz.java 1135949 
  http://svn.apache.org/repos/asf/hadoop/common/trunk/common/src/test/core/org/apache/hadoop/fs/shell/TestPathData.java 1135949 

Diff: https://reviews.apache.org/r/908/diff


Testing
---


Thanks,

Bochun



[jira] [Created] (HADOOP-7398) create a mechanism to suppress the HADOOP_HOME deprecated warning

2011-06-15 Thread Owen O'Malley (JIRA)
create a mechanism to suppress the HADOOP_HOME deprecated warning
-

 Key: HADOOP-7398
 URL: https://issues.apache.org/jira/browse/HADOOP-7398
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Owen O'Malley
Assignee: Owen O'Malley


Create a new mechanism to suppress the warning about HADOOP_HOME deprecation.

I'll create a HADOOP_HOME_WARN_SUPPRESS environment variable that suppresses 
the warning.
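
For illustration, a guard of this sort in the launcher script might look like 
the following (a sketch only; the variable name comes from this issue, and the 
surrounding script logic is assumed):

# Sketch: only warn about a set HADOOP_HOME when suppression is not requested.
if [ "$HADOOP_HOME_WARN_SUPPRESS" = "" ] && [ "$HADOOP_HOME" != "" ]; then
  echo "Warning: \$HADOOP_HOME is deprecated." 1>&2
fi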

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hadoop-Common-trunk-Commit - Build # 658 - Failure

2011-06-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/658/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 25624 lines...]

jar:
  [tar] Nothing to do: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/classes/bin.tgz
 is up to date.
  [jar] Building jar: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/hadoop-common-0.23.0-SNAPSHOT.jar
  [jar] Building jar: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/hadoop-common-0.23.0-SNAPSHOT-sources.jar

ivy-resolve-test:

ivy-retrieve-test:

generate-test-records:

generate-avro-records:
Trying to override old definition of task schema

generate-avro-protocols:
Trying to override old definition of task schema

compile-core-test:
[javac] Compiling 9 source files to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/core/classes
   [clover] Clover Version 3.0.2, built on April 13 2010 (build-790)
   [clover] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
   [clover] Clover: Open Source License registered to Apache.
   [clover] Updating existing database at 
'/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/clover/db/hadoop_coverage.db'.
   [clover] Processing files at 1.6 source level.
   [clover] Clover all over. Instrumented 0 files (0 packages).
   [clover] Elapsed time = 0 secs.
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
Trying to override old definition of task paranamer
[paranamer] Generating parameter names from 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/src/test/core
 to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/core/classes
   [delete] Deleting directory 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/cache
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/cache
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/cache

run-test-core:
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/data
 [copy] Copying 3 files to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/webapps
[mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/logs
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/extraconf
 [copy] Copying 1 file to 
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build-fi/test/extraconf

checkfailure:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Common-trunk-Commit/trunk/build.xml:835:
 Tests failed!

Total time: 9 minutes 18 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
6 tests failed.
REGRESSION:  
org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testOneBlockPlusOneEntry

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
at 
org.apache.hadoop.io.file.tfile.TestTFileByteArrays.checkBlockIndex(TestTFileByteArrays.java:665)
at 
org.apache.hadoop.io.file.tfile.TestTFileByteArrays.__CLR3_0_2qcmd5m1ap3(TestTFileByteArrays.java:157)
at 
org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testOneBlockPlusOneEntry(TestTFileByteArrays.java:151)


REGRESSION:  org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testTwoBlocks

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
at 
org.apache.hadoop.io.file.tfile.TestTFileByteArrays.checkBlockIndex(TestTFileByteArrays.java:665)
at 
org.apache.hadoop.io.file.tfile.TestTFileByteArrays.__CLR3_0_2sehq6f1apc(TestTFileByteArrays.java:165)
at 
org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testTwoBlocks(TestTFileByteArrays.java:160)


REGRESSION:  org.apache.hadoop.io.file.tfile.TestTFileByteArrays.testThreeBlocks

Error Message:
expected:<2> but was:<1>

Sta

Re: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/fop/messaging/MessageHandler

2011-06-15 Thread Thomas Anderson
Hi Aaron,

Switching to Forrest 0.8 solves the problem. Also, thanks for helping to
file the JIRA issue regarding Forrest v0.9.

Thank you very much.

On Wed, Jun 15, 2011 at 10:12 AM, Aaron T. Myers  wrote:
> Also, I've filed https://issues.apache.org/jira/browse/HADOOP-7394 to track
> this issue.
>
> --
> Aaron T. Myers
> Software Engineer, Cloudera
>
>
>
> On Tue, Jun 14, 2011 at 7:06 PM, Aaron T. Myers  wrote:
>
>> Hey Thomas,
>>
>> I believe you're getting this error because you're setting forrest.home to
>> use Forrest 0.9. For some reason I'm not quite sure of (perhaps a Forrest
>> bug?) trying to build the docs with 0.9 gives me the same error.
>>
>> You can grab Forrest 0.8 from here:
>> http://archive.apache.org/dist/forrest/0.8/
>>
>> --
>> Aaron T. Myers
>> Software Engineer, Cloudera
>>
>>
>>
>> On Mon, Jun 13, 2011 at 10:26 PM, Thomas Anderson <
>> t.dt.aander...@gmail.com> wrote:
>>
>>> When compiling the source against the latest trunk of the common repository,
>>> it throws an FOP exception. I can't find where to add this lib, e.g. FOP
>>> v1.0. Does any config file need to be altered so that the compilation
>>> would work?
>>>
>>> Thanks.
>>>
>>>    [exec] Exception in thread "main" java.lang.NoClassDefFoundError:
>>> org/apache/fop/messaging/MessageHandler
>>>     [exec]     at
>>>
>>> org.apache.cocoon.serialization.FOPSerializer.configure(FOPSerializer.java:122)
>>>     [exec]     at
>>>
>>> org.apache.avalon.framework.container.ContainerUtil.configure(ContainerUtil.java:201)
>>>     [exec]     at
>>>
>>> org.apache.avalon.excalibur.component.DefaultComponentFactory.newInstance(DefaultComponentFactory.java:289)
>>>     [exec]     at
>>>
>>> org.apache.avalon.excalibur.pool.InstrumentedResourceLimitingPool.newPoolable(InstrumentedResourceLimitingPool.java:655)
>>>     [exec]     at
>>>
>>> org.apache.avalon.excalibur.pool.InstrumentedResourceLimitingPool.get(InstrumentedResourceLimitingPool.java:371)
>>>     [exec]     at
>>>
>>> org.apache.avalon.excalibur.component.PoolableComponentHandler.doGet(PoolableComponentHandler.java:198)
>>>     [exec]     at
>>>
>>> org.apache.avalon.excalibur.component.ComponentHandler.get(ComponentHandler.java:381)
>>>     [exec]     at
>>>
>>> org.apache.avalon.excalibur.component.ExcaliburComponentSelector.select(ExcaliburComponentSelector.java:215)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.ExtendedComponentSelector.select(ExtendedComponentSelector.java:268)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.pipeline.AbstractProcessingPipeline.setSerializer(AbstractProcessingPipeline.java:311)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.pipeline.impl.AbstractCachingProcessingPipeline.setSerializer(AbstractCachingProcessingPipeline.java:171)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.sitemap.SerializeNode.invoke(SerializeNode.java:120)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.sitemap.SelectNode.invoke(SelectNode.java:103)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:47)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.sitemap.PreparableMatchNode.invoke(PreparableMatchNode.java:131)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.sitemap.PipelineNode.invoke(PipelineNode.java:143)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.sitemap.PipelinesNode.invoke(PipelinesNode.java:93)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.ConcreteTreeProcessor.process(ConcreteTreeProcessor.java:235)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.ConcreteTreeProcessor.process(ConcreteTreeProcessor.java:177)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.TreeProcessor.process(TreeProcessor.java:254)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.sitemap.MountNode.invoke(MountNode.java:118)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.sitemap.SelectNode.invoke(SelectNode.java:98)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
>>>     [exec]     at
>>>
>>> org.apache.cocoon.components.t

automatic monitoring of the utilization of slaves

2011-06-15 Thread bikash sharma
Hi -- Is there a way by which a slave can get a trigger when a Hadoop job
finishes on the master?
The use case is as follows:
I need to monitor CPU and memory utilization automatically, for which I need
to know the timestamps at which to start and stop the sar utility,
corresponding to the start and finish of the Hadoop job on the master.
It is simple to do on the master, since the Hadoop job runs there, but how do
we do it for the slaves?
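
One possible approach, sketched below under the assumption that each slave can
reach the JobTracker and knows the job id: poll the job's status from a small
client and wrap the sar run around it (the sar handling and output path are
placeholders).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

// Sketch only: run on each slave with the job id as the argument. Starts sar,
// polls the JobTracker until the job completes, then stops sar.
public class JobWatcher {
  public static void main(String[] args) throws Exception {
    JobClient client = new JobClient(new JobConf(new Configuration()));
    RunningJob job = client.getJob(JobID.forName(args[0]));
    Process sar = Runtime.getRuntime().exec(
        new String[] {"sar", "-o", "/tmp/job.sar", "5"}); // sample every 5s
    try {
      while (job != null && !job.isComplete()) {
        Thread.sleep(5000); // poll the job status
      }
    } finally {
      sar.destroy(); // job finished (or not found): stop sampling
    }
  }
}

Alternatively, setting the job.end.notification.url property on the job makes
the JobTracker call back an HTTP URL when the job finishes, which avoids
polling.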

Thanks.
Bikash