[jira] [Commented] (HADOOP-8062) Incorrect registrant on the Apache Hadoop mailing list is causing a lot of delivery failure return mails

2012-04-28 Thread onder sezgin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264442#comment-13264442
 ] 

onder sezgin commented on HADOOP-8062:
--

I cancelled my subscription and resubscribed, but the issue still stands.
Please help.

> Incorrect registrant on the Apache Hadoop mailing list is causing a lot of 
> delivery failure return mails
> 
>
> Key: HADOOP-8062
> URL: https://issues.apache.org/jira/browse/HADOOP-8062
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Harsh J
>Priority: Blocker
>
> Unsure who has moderation/admin access to mailing list subscriptions for 
> Apache Hadoop but the following address somehow got registered but is not 
> useful for the mailer:
> bq. "NOTES:Stanislav_Seltser/SSP/SUNGARD%SSP-Bedford"@sas.sungardrs.com
> Failure mail that comes for every mail the mailer sends:
> {code}
> Your message
>  Subject: Re: Hadoop Terasort Error- "File _partition.lst does not exist"
> was not delivered to:
>  "NOTES:Stanislav_Seltser/SSP/SUNGARD%SSP-Bedford"@sas.sungardrs.com
> because:
>  The message could not be delivered because the recipient's destination email 
> system is unknown or invalid. Please check the address and try again, or 
> contact your system administrator to verify connectivity to the email system 
> of the recipient. [MAPI Reason Code: 0, MAPI Diagnostic Code 48]
> Final-Recipient: 
> rfc822;"NOTES:Stanislav_Seltser/SSP/SUNGARD%SSP-Bedford"@sas.sungardrs.com
> Action: failed
> Status: 5.0.0
> Diagnostic-Code: X-Notes;The message could not be delivered because the 
> recipient's destination email system is unknown or invalid. Please check the 
> address and try again, or contact your system administrator to verify 
> connectivity to the email system of 
> -- Forwarded message --
> From: 
> To: 
> Cc: 
> Date: Wed, 25 Jan 2012 01:10:21 -0500
> Subject: Re: Hadoop Terasort Error- "File _partition.lst does not exist"
> Apparently, you are running terasort with a local job runner as 
> explained by the presence of "org.apache.hadoop.fs.RawLocalFileSystem" 
> and "LocalJobRunner" in your provided log message.
> {code}
> If this is the wrong place to report this, please do help move it to the right 
> section -- INFRA?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8148) Zero-copy ByteBuffer-based compressor / decompressor API

2012-04-28 Thread Tim Broberg (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264402#comment-13264402
 ] 

Tim Broberg commented on HADOOP-8148:
-

Consider my previous comment of 11/Apr/12 23:58, which suggested that the read() 
function should return a buffer rather than filling a buffer provided by the 
caller.

This means that the buffers are owned by the stream layer. The suggested 
definition also implies that the stream layer picks the buffer size, which can 
be good, as the stream layer knows what buffer sizes are appropriate for the 
compression algorithms in question.

Is that ok?
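A minimal sketch of what a buffer-returning read could look like. This is purely illustrative; the names and signatures here are assumptions, not the API in the attached patch:

```java
import java.io.IOException;
import java.nio.ByteBuffer;

/** Hypothetical interface: read() returns a stream-owned buffer instead of
 *  filling a caller-supplied one; the stream layer picks the buffer size. */
interface BufferReturningStream {
  /** Returns the next buffer of data, or null at end of stream. The buffer
   *  is owned by the stream and is only valid until the next read(). */
  ByteBuffer read() throws IOException;
}

/** Trivial in-memory implementation, for illustration only. */
class InMemoryStream implements BufferReturningStream {
  private final ByteBuffer data;
  private boolean consumed;

  InMemoryStream(byte[] bytes) {
    this.data = ByteBuffer.wrap(bytes);
  }

  @Override
  public ByteBuffer read() {
    if (consumed) {
      return null; // end of stream
    }
    consumed = true;
    return data; // stream-owned; the caller must copy anything it wants to keep
  }
}
```

Under this contract the caller never allocates, which is what puts buffer-size policy (and reuse/pooling) entirely in the stream layer's hands.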


> Zero-copy ByteBuffer-based compressor / decompressor API
> 
>
> Key: HADOOP-8148
> URL: https://issues.apache.org/jira/browse/HADOOP-8148
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io
>Reporter: Tim Broberg
>Assignee: Tim Broberg
> Attachments: hadoop8148.patch
>
>
> Per Todd Lipcon's comment in HDFS-2834, "
>   Whenever a native decompression codec is being used, ... we generally have 
> the following copies:
>   1) Socket -> DirectByteBuffer (in SocketChannel implementation)
>   2) DirectByteBuffer -> byte[] (in SocketInputStream)
>   3) byte[] -> Native buffer (set up for decompression)
>   4*) decompression to a different native buffer (not really a copy - 
> decompression necessarily rewrites)
>   5) native buffer -> byte[]
>   with the proposed improvement we can hopefully eliminate #2,#3 for all 
> applications, and #2,#3,and #5 for libhdfs.
> "
> The interfaces in the attached patch attempt to address:
>  A - Compression and decompression based on ByteBuffers (HDFS-2834)
>  B - Zero-copy compression and decompression (HDFS-3051)
> C - Provide the caller a way to know the maximum space required to hold 
> compressed output.





[jira] [Commented] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-04-28 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264398#comment-13264398
 ] 

Alejandro Abdelnur commented on HADOOP-8325:


Not to mention that a thread is not guaranteed to run to completion before another one starts.




> Add a ShutdownHookManager to be used by different components instead of the 
> JVM shutdownhook
> 
>
> Key: HADOOP-8325
> URL: https://issues.apache.org/jira/browse/HADOOP-8325
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
> HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch
>
>
> FileSystem adds a JVM shutdown hook when a filesystem instance is cached.
> MRAppMaster also uses a JVM shutdown hook; among other things, the 
> MRAppMaster JVM shutdown hook is used to ensure state is written to HDFS.
> This creates a race condition because each JVM shutdown hook is a separate 
> thread, and if there are multiple JVM shutdown hooks there is no assurance of 
> order of execution; they could even run in parallel.
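The race described above disappears once a single manager owns the one real JVM hook and runs registered callbacks itself in a defined order. A minimal sketch under assumed names (the actual patch's ShutdownHookManager API may differ):

```java
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/** Sketch of a manager that owns the single real JVM shutdown hook and runs
 *  registered hooks itself, highest priority first. Names are assumptions;
 *  the attached patch's API may differ. */
class SimpleShutdownHookManager {

  private static final class HookEntry {
    final Runnable hook;
    final int priority;
    HookEntry(Runnable hook, int priority) {
      this.hook = hook;
      this.priority = priority;
    }
  }

  private final List<HookEntry> hooks = new CopyOnWriteArrayList<>();

  SimpleShutdownHookManager() {
    // Register exactly one JVM-level hook; ordering among our callbacks is
    // now ours to define instead of the JVM's (which defines none).
    Runtime.getRuntime().addShutdownHook(new Thread(this::runHooks));
  }

  void addShutdownHook(Runnable hook, int priority) {
    hooks.add(new HookEntry(hook, priority));
  }

  /** Runs hooks in descending priority order; a failing hook does not stop
   *  the rest. Package-visible here so the ordering is easy to exercise. */
  void runHooks() {
    hooks.stream()
        .sorted(Comparator.comparingInt((HookEntry e) -> e.priority).reversed())
        .forEach(e -> {
          try {
            e.hook.run();
          } catch (Throwable t) {
            // log and continue; one bad hook must not block the others
          }
        });
  }
}
```

With this shape, MRAppMaster's state-flushing hook can be registered at a higher priority than FileSystem's cache-closing hook and is guaranteed to finish first.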





[jira] [Commented] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-04-28 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264396#comment-13264396
 ] 

Alejandro Abdelnur commented on HADOOP-8325:


@Ravi, thread priority/group/context/privileges do not ensure order or 
serialization of execution when the number of processors/cores is greater than 
one (and I assume it is not guaranteed even in the single-core case).
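A small demonstration of this point: if priority alone serialized execution (the high-priority thread running to completion before the low-priority one starts), the program below would deadlock, because the high-priority thread waits on a latch only the low-priority thread releases. It completes instead, showing threads of different priorities genuinely run concurrently.

```java
import java.util.concurrent.CountDownLatch;

/** If thread priority serialized execution, this would deadlock: the
 *  high-priority thread blocks on a latch that only the low-priority
 *  thread releases. It completes instead, because threads of different
 *  priorities run concurrently. */
class PriorityIsNotOrdering {
  static boolean bothFinished() {
    CountDownLatch latch = new CountDownLatch(1);
    Thread high = new Thread(() -> {
      try {
        latch.await(); // blocks until the low-priority thread runs
      } catch (InterruptedException ignored) {
      }
    });
    Thread low = new Thread(latch::countDown);
    high.setPriority(Thread.MAX_PRIORITY);
    low.setPriority(Thread.MIN_PRIORITY);
    high.start();
    low.start();
    try {
      high.join(5000); // timeouts guard against a (hypothetical) deadlock
      low.join(5000);
    } catch (InterruptedException e) {
      return false;
    }
    return !high.isAlive() && !low.isAlive();
  }
}
```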






[jira] [Commented] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-04-28 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264375#comment-13264375
 ] 

Todd Lipcon commented on HADOOP-8323:
-

- in the Javadoc, use HTML like <em>Note</em> instead of {{*Note*}} (hopefully 
JIRA doesn't mangle my comment here.. saying to use the 'em' HTML tag)
- for the code example, use the 'code' HTML tag in the javadoc instead of 
double quotes
- for the assertions, I think it's clearer for the provided string to state the 
expected behavior instead of the error behavior. For example: "String should 
be empty after clear()" instead of "String isn't empty after clear"
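An illustrative sketch of the style being suggested (HTML tags in the javadoc, assertion messages stating expected behavior). This is not the actual patch to org.apache.hadoop.io.Text; the class and methods are stand-ins:

```java
/** Illustrative only: the javadoc style being suggested, not the actual
 *  patch to org.apache.hadoop.io.Text. */
class TextLike {
  private byte[] bytes = new byte[0];
  private int length;

  /**
   * Clears the string to empty.
   *
   * <p><em>Note:</em> the backing byte array is <em>not</em> shrunk or
   * zeroed; use <code>getLength()</code> to determine how much of the
   * backing array is valid after <code>clear()</code>.
   */
  public void clear() {
    length = 0;
  }

  public int getLength() {
    return length;
  }

  public void set(byte[] b) {
    bytes = b;
    length = b.length;
  }
}
```

And in a test, the message states the expected behavior, e.g. `assert t.getLength() == 0 : "String should be empty after clear()";`.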


> Revert HADOOP-7940 and improve javadocs and test for Text.clear()
> -
>
> Key: HADOOP-8323
> URL: https://issues.apache.org/jira/browse/HADOOP-8323
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Critical
>  Labels: performance
> Attachments: HADOOP-8323.patch
>
>
> Per [~jdonofrio]'s comments on HADOOP-7940, we should revert it as it has 
> caused a performance regression (for scenarios where Text is reused, popular 
> in MR).
> The clear() works as intended, as the API also offers a current length API.





[jira] [Commented] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-04-28 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264354#comment-13264354
 ] 

Ravi Prakash commented on HADOOP-8325:
--

Hey Tucu! What functionality does this ShutdownHookManager provide that the 
JVM shutdown hook mechanism does not? 
http://docs.oracle.com/javase/1.4.2/docs/guide/lang/hook-design.html states
bq. The thread can be created in the proper thread group, given the correct 
priority, context, and privileges, and so forth. 
We can set the priorities of the existing shutdown threads, and since the JVM 
uses a preemptive, priority-based scheduling algorithm, they will in essence 
run in order of priority (two threads may have the same priority, but that is 
beside the point).






[jira] [Commented] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-04-28 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264347#comment-13264347
 ] 

Alejandro Abdelnur commented on HADOOP-8325:


TestSequenceFile failure seems unrelated.






[jira] [Commented] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-04-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264335#comment-13264335
 ] 

Hadoop QA commented on HADOOP-8325:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12524980/HADOOP-8325.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.io.TestSequenceFile

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/903//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/903//console

This message is automatically generated.






[jira] [Commented] (HADOOP-8305) distcp over viewfs is broken

2012-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264333#comment-13264333
 ] 

Hudson commented on HADOOP-8305:


Integrated in Hadoop-Mapreduce-trunk #1063 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1063/])
HADOOP-8305. distcp over viewfs is broken (John George via bobby) (Revision 
1331440)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331440
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapper.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCp.java


> distcp over viewfs is broken
> 
>
> Key: HADOOP-8305
> URL: https://issues.apache.org/jira/browse/HADOOP-8305
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.3, 2.0.0, 3.0.0
>Reporter: John George
>Assignee: John George
> Fix For: 0.23.3, 2.0.0, 3.0.0
>
> Attachments: HADOOP-8305.patch, HADOOP-8305.patch
>
>
> This is similar to MAPREDUCE-4133. distcp over viewfs is broken because 
> getDefaultReplication/BlockSize are being requested with no arguments.
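The failure mode is easy to model: under a mount-table (viewfs-like) filesystem there is no single default block size or replication, so the no-argument variants cannot answer, while the path-taking variants can resolve the mount point first. A toy model with hypothetical names, not the actual Hadoop API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy model (not the Hadoop API) of why path-less defaults break under a
 *  mount-table filesystem: the right answer depends on which mount point
 *  the path resolves to. */
class MountTableFs {
  private final Map<String, Long> blockSizeByMount = new LinkedHashMap<>();

  void addMount(String prefix, long blockSize) {
    blockSizeByMount.put(prefix, blockSize);
  }

  /** Path-less variant: there is no single meaningful answer. */
  long getDefaultBlockSize() {
    throw new UnsupportedOperationException(
        "no single default block size across mount points");
  }

  /** Path-aware variant: resolve the mount point, then answer. */
  long getDefaultBlockSize(String path) {
    for (Map.Entry<String, Long> e : blockSizeByMount.entrySet()) {
      if (path.startsWith(e.getKey())) {
        return e.getValue();
      }
    }
    throw new IllegalArgumentException("path not mounted: " + path);
  }
}
```

The fix in the patch is, in spirit, the second call shape: always hand the filesystem the path whose defaults you want.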





[jira] [Updated] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-04-28 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8325:
---

Attachment: HADOOP-8325.patch

Re-uploading to see if Jenkins picks it up.






[jira] [Commented] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-04-28 Thread Jim Donofrio (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264318#comment-13264318
 ] 

Jim Donofrio commented on HADOOP-8323:
--

+1 patch looks good to me






[jira] [Commented] (HADOOP-8305) distcp over viewfs is broken

2012-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264315#comment-13264315
 ] 

Hudson commented on HADOOP-8305:


Integrated in Hadoop-Hdfs-trunk #1028 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1028/])
HADOOP-8305. distcp over viewfs is broken (John George via bobby) (Revision 
1331440)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331440
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapper.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCp.java







[jira] [Commented] (HADOOP-8305) distcp over viewfs is broken

2012-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264310#comment-13264310
 ] 

Hudson commented on HADOOP-8305:


Integrated in Hadoop-Hdfs-0.23-Build #241 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/241/])
svn merge -c 1331440. FIXES: HADOOP-8305. distcp over viewfs is broken 
(John George via bobby) (Revision 1331443)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1331443
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCp.java
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyMapper.java
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCp.java







[jira] [Updated] (HADOOP-8062) Incorrect registrant on the Apache Hadoop mailing list is causing a lot of delivery failure return mails

2012-04-28 Thread onder sezgin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

onder sezgin updated HADOOP-8062:
-

Priority: Blocker  (was: Trivial)






[jira] [Commented] (HADOOP-8062) Incorrect registrant on the Apache Hadoop mailing list is causing a lot of delivery failure return mails

2012-04-28 Thread onder sezgin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264291#comment-13264291
 ] 

onder sezgin commented on HADOOP-8062:
--

I have a similar problem. I cannot send any email to the user group. 
Please help!

