[jira] [Updated] (HDFS-4201) NPE in BPServiceActor#sendHeartBeat

2013-12-06 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HDFS-4201:
--

Attachment: trunk-4201_v2.patch

Fixed the test failures. Also enhanced the fix a little so that we register the 
block pool after the datanode initialization is done.

 NPE in BPServiceActor#sendHeartBeat
 ---

 Key: HDFS-4201
 URL: https://issues.apache.org/jira/browse/HDFS-4201
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Eli Collins
Assignee: Jimmy Xiang
Priority: Critical
 Fix For: 3.0.0

 Attachments: trunk-4201.patch, trunk-4201_v2.patch


 Saw the following NPE in a log.
 Think this is likely due to {{dn}} or {{dn.getFSDataset()}} being null (not 
 {{bpRegistration}}), due to a configuration or local directory failure.
 {code}
 2012-09-25 04:33:20,782 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 For namenode svsrs00127/11.164.162.226:8020 using DELETEREPORT_INTERVAL of 
 30 msec  BLOCKREPORT_INTERVAL of 2160msec Initial delay: 0msec; 
 heartBeatInterval=3000
 2012-09-25 04:33:20,782 ERROR 
 org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService 
 for Block pool BP-1678908700-11.164.162.226-1342785481826 (storage id 
 DS-1031100678-11.164.162.251-5010-1341933415989) service to 
 svsrs00127/11.164.162.226:8020
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:434)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:520)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
 at java.lang.Thread.run(Thread.java:722)
 {code}
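The suspected failure mode above can be sketched as a simple precondition check. This is a hedged illustration with hypothetical names ({{DataNodeStub}}, {{readyToHeartbeat}}), not the committed fix:

```java
// Hypothetical guard illustrating the suspected cause: if the DataNode or
// its FSDataset is still null (e.g. after a configuration or local
// directory failure), a heartbeat must not be attempted yet.
public class HeartbeatGuard {
    static class DataNodeStub {
        Object fsDataset;  // stand-in for dn.getFSDataset()
    }

    static boolean readyToHeartbeat(DataNodeStub dn) {
        // Check both references the NPE could come from.
        return dn != null && dn.fsDataset != null;
    }

    public static void main(String[] args) {
        DataNodeStub dn = new DataNodeStub();
        System.out.println(readyToHeartbeat(null));  // false: dn is null
        System.out.println(readyToHeartbeat(dn));    // false: dataset not set
        dn.fsDataset = new Object();
        System.out.println(readyToHeartbeat(dn));    // true
    }
}
```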



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5639) rpc scheduler abstraction

2013-12-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842073#comment-13842073
 ] 

Hadoop QA commented on HDFS-5639:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617519/HDFS-5639-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5671//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5671//console

This message is automatically generated.

 rpc scheduler abstraction
 -

 Key: HDFS-5639
 URL: https://issues.apache.org/jira/browse/HDFS-5639
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
 Attachments: HDFS-5639-2.patch, HDFS-5639.patch


 We have run into various issues in namenode and hbase w.r.t. rpc handling in 
 multi-tenant clusters. The examples are
 https://issues.apache.org/jira/browse/HADOOP-9640
 https://issues.apache.org/jira/browse/HBASE-8836
 There are different ideas on how to prioritize rpc requests. It could be 
 based on user id, or on whether it is a read request or a write request, or it 
 could use a specific rule like datanode's RPC is more important than client RPC.
 We want to enable people to implement and experiment with different rpc 
 schedulers.
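The abstraction described above could look roughly like the following sketch. The names are illustrative only and may not match the actual HDFS-5639 patch:

```java
// Sketch of a pluggable RPC scheduler abstraction: a scheduler maps each
// incoming call to a priority, and the server drains queued calls in that
// order. Names are hypothetical, not the HDFS-5639 API.
public class RpcSchedulerSketch {
    interface RpcScheduler {
        // Lower value means served sooner.
        int priorityOf(String user, boolean isWrite, boolean fromDatanode);
    }

    // Example policy from the description: datanode RPC outranks client
    // RPC, and reads outrank writes.
    static class DatanodeFirstScheduler implements RpcScheduler {
        public int priorityOf(String user, boolean isWrite, boolean fromDatanode) {
            if (fromDatanode) return 0;
            return isWrite ? 2 : 1;
        }
    }

    public static void main(String[] args) {
        RpcScheduler s = new DatanodeFirstScheduler();
        System.out.println(s.priorityOf("alice", true, false));  // 2
        System.out.println(s.priorityOf("dn1", false, true));    // 0
    }
}
```

A user-id-based or read/write-based policy would just be another implementation of the same interface.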





[jira] [Created] (HDFS-5640) Add snapshot methods to FileContext.

2013-12-06 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5640:
---

 Summary: Add snapshot methods to FileContext.
 Key: HDFS-5640
 URL: https://issues.apache.org/jira/browse/HDFS-5640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.2.0, 3.0.0
Reporter: Chris Nauroth


Currently, methods related to HDFS snapshots are defined on {{FileSystem}}.  
For feature parity, these methods need to be added to {{FileContext}}.  This 
would also require updating {{AbstractFileSystem}} and the {{Hdfs}} subclass.
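The delegation pattern implied above can be sketched as follows. All names here are stand-ins ({{AbstractFs}}, {{HdfsLike}}); the real API is whatever the eventual HDFS-5640 patch defines:

```java
// Sketch of FileContext-style snapshot parity: the client-facing entry
// point delegates to an AbstractFileSystem-like base class, and an
// Hdfs-like subclass overrides with a real implementation. Hypothetical
// names, not the actual Hadoop classes.
public class SnapshotParitySketch {
    static abstract class AbstractFs {
        // Default: unsupported, mirroring how optional operations are
        // typically surfaced in a base class.
        String createSnapshot(String path, String name) {
            throw new UnsupportedOperationException("createSnapshot");
        }
    }

    static class HdfsLike extends AbstractFs {
        String createSnapshot(String path, String name) {
            return path + "/.snapshot/" + name;  // illustrative only
        }
    }

    // FileContext-style wrapper: just forwards to the backing FS.
    static String createSnapshot(AbstractFs fs, String path, String name) {
        return fs.createSnapshot(path, name);
    }

    public static void main(String[] args) {
        System.out.println(createSnapshot(new HdfsLike(), "/user/schu", "s1"));
    }
}
```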





[jira] [Created] (HDFS-5641) ViewFileSystem should support snapshot methods.

2013-12-06 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5641:
---

 Summary: ViewFileSystem should support snapshot methods.
 Key: HDFS-5641
 URL: https://issues.apache.org/jira/browse/HDFS-5641
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, snapshots
Affects Versions: 2.2.0, 3.0.0
Reporter: Chris Nauroth


Currently, {{ViewFileSystem}} does not dispatch snapshot methods through the 
mount table.  All snapshot methods throw {{UnsupportedOperationException}}, 
even though the underlying mount points could be HDFS instances that support 
snapshots.  We need to update {{ViewFileSystem}} to implement the snapshot 
methods.
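Dispatching through the mount table amounts to resolving the longest matching mount prefix and forwarding the call. A minimal sketch under assumed names (not the actual {{ViewFileSystem}} code):

```java
import java.util.TreeMap;

// Sketch of routing a snapshot call through a mount table instead of
// unconditionally throwing UnsupportedOperationException. Longest-prefix
// match picks the backing file system. Illustrative only.
public class ViewFsDispatchSketch {
    interface Fs { String createSnapshot(String path); }

    static final TreeMap<String, Fs> MOUNTS = new TreeMap<>();

    static String createSnapshot(String viewPath) {
        // Among prefixes of the same path, descending lexicographic order
        // visits the longest match first.
        for (String prefix : MOUNTS.descendingKeySet()) {
            if (viewPath.startsWith(prefix)) {
                return MOUNTS.get(prefix).createSnapshot(
                    viewPath.substring(prefix.length()));
            }
        }
        throw new UnsupportedOperationException("no mount for " + viewPath);
    }

    public static void main(String[] args) {
        MOUNTS.put("/user", p -> "hdfs:/user" + p + "/.snapshot/s0");
        System.out.println(createSnapshot("/user/schu"));  // hdfs:/user/schu/.snapshot/s0
    }
}
```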





[jira] [Updated] (HDFS-5640) Add snapshot methods to FileContext.

2013-12-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5640:


Component/s: snapshots

 Add snapshot methods to FileContext.
 

 Key: HDFS-5640
 URL: https://issues.apache.org/jira/browse/HDFS-5640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, snapshots
Affects Versions: 3.0.0, 2.2.0
Reporter: Chris Nauroth

 Currently, methods related to HDFS snapshots are defined on {{FileSystem}}.  
 For feature parity, these methods need to be added to {{FileContext}}.  This 
 would also require updating {{AbstractFileSystem}} and the {{Hdfs}} subclass.





[jira] [Updated] (HDFS-5554) Add Snapshot Feature to INodeFile

2013-12-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5554:
-

Hadoop Flags: Reviewed

+1 patch looks good.

 Add Snapshot Feature to INodeFile
 -

 Key: HDFS-5554
 URL: https://issues.apache.org/jira/browse/HDFS-5554
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-5554.001.patch, HDFS-5554.002.patch, 
 HDFS-5554.003.patch


 Similar with HDFS-5285, we can add a FileWithSnapshot feature to INodeFile 
 and use it to replace the current INodeFileWithSnapshot.





[jira] [Commented] (HDFS-4868) Clean up error message when trying to snapshot using ViewFileSystem

2013-12-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842091#comment-13842091
 ] 

Chris Nauroth commented on HDFS-4868:
-

Hi, [~schu].  I think we ought to support the snapshot methods through 
{{ViewFileSystem}}, so I filed HDFS-5641 for adding that support.  Considering 
that, do you think we can close out HDFS-4868 as Won't Fix?

 Clean up error message when trying to snapshot using ViewFileSystem
 ---

 Key: HDFS-4868
 URL: https://issues.apache.org/jira/browse/HDFS-4868
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 3.0.0
Reporter: Stephen Chu
Priority: Minor

 Snapshots aren't supported for the ViewFileSystem. When users try to create a 
 snapshot, they'll run into a message like the following:
 {code}
 schu-mbp:presentation schu$ hadoop fs -createSnapshot /user/schu
 -createSnapshot: Fatal internal error
 java.lang.UnsupportedOperationException: ViewFileSystem doesn't support 
 createSnapshot
   at org.apache.hadoop.fs.FileSystem.createSnapshot(FileSystem.java:2285)
   at 
 org.apache.hadoop.fs.shell.SnapshotCommands$CreateSnapshot.processArguments(SnapshotCommands.java:87)
   at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:305)
 {code}
 To make things more readable and avoid confusion, it would be helpful to 
 clean up the error message stacktrace and just state that ViewFileSystem 
 doesn't support createSnapshot, similar to what was done in HDFS-4846. The 
 fatal internal error message is a bit scary and it might be useful to 
 remove that message to avoid confusion from operators.





[jira] [Updated] (HDFS-5554) Add Snapshot Feature to INodeFile

2013-12-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5554:
-

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Jing!

 Add Snapshot Feature to INodeFile
 -

 Key: HDFS-5554
 URL: https://issues.apache.org/jira/browse/HDFS-5554
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 3.0.0

 Attachments: HDFS-5554.001.patch, HDFS-5554.002.patch, 
 HDFS-5554.003.patch


 Similar with HDFS-5285, we can add a FileWithSnapshot feature to INodeFile 
 and use it to replace the current INodeFileWithSnapshot.





[jira] [Created] (HDFS-5642) libhdfs Windows compatibility.

2013-12-06 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5642:
---

 Summary: libhdfs Windows compatibility.
 Key: HDFS-5642
 URL: https://issues.apache.org/jira/browse/HDFS-5642
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.2.0, 3.0.0
Reporter: Chris Nauroth


Currently, the libhdfs codebase does not compile on Windows due to use of 
several Linux-specific functions, lack of build script support, and use of C99 
constructs.  The scope of this issue includes converting those function calls 
to cross-platform equivalents (or use {{#ifdef}} if necessary), setting up 
build support, and some code clean-ups to follow the C89 rules.





[jira] [Created] (HDFS-5643) libhdfs AIX compatibility.

2013-12-06 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5643:
---

 Summary: libhdfs AIX compatibility.
 Key: HDFS-5643
 URL: https://issues.apache.org/jira/browse/HDFS-5643
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.2.0, 3.0.0
Reporter: Chris Nauroth


Currently, libhdfs does not compile on AIX due to use of C99-style comments.  
The scope of this issue is getting the code to build and test on AIX.





[jira] [Commented] (HDFS-5643) libhdfs AIX compatibility.

2013-12-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842095#comment-13842095
 ] 

Chris Nauroth commented on HDFS-5643:
-

The fix suggested by [~cmccabe] is to pass the {{-qcpluscmt}} flag via 
{{CMakeLists.txt}}.
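A minimal sketch of how that flag could be wired in, assuming the IBM XL compiler is detected with compiler id "XL" (this is an illustration, not the committed change):

```cmake
# Hypothetical fragment: enable C++-style (//) comments under the AIX XL
# C compiler by passing -qcpluscmt, as suggested above.
if (CMAKE_C_COMPILER_ID STREQUAL "XL")
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -qcpluscmt")
endif()
```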

 libhdfs AIX compatibility.
 --

 Key: HDFS-5643
 URL: https://issues.apache.org/jira/browse/HDFS-5643
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 3.0.0, 2.2.0
Reporter: Chris Nauroth

 Currently, libhdfs does not compile on AIX due to use of C99-style comments.  
 The scope of this issue is getting the code to build and test on AIX.





[jira] [Created] (HDFS-5644) Reduce lock contention in libhdfs.

2013-12-06 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5644:
---

 Summary: Reduce lock contention in libhdfs.
 Key: HDFS-5644
 URL: https://issues.apache.org/jira/browse/HDFS-5644
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.2.0, 3.0.0
Reporter: Chris Nauroth


libhdfs uses locking internally for coordinating access to shared hash tables 
and the JNI environment.  The scope of this issue is to improve performance of 
libhdfs by reducing lock contention.





[jira] [Commented] (HDFS-5554) Add Snapshot Feature to INodeFile

2013-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842096#comment-13842096
 ] 

Hudson commented on HDFS-5554:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4848 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4848/])
HDFS-5554. Flatten INodeFile hierarchy: Replace INodeFileWithSnapshot with 
FileWithSnapshotFeature.  Contributed by jing9 (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1548796)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSnapshotPathINodes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestINodeFileUnderConstructionWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java


 Add Snapshot Feature to INodeFile
 -

 Key: HDFS-5554
 URL: https://issues.apache.org/jira/browse/HDFS-5554
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 3.0.0

 Attachments: HDFS-5554.001.patch, HDFS-5554.002.patch, 
 HDFS-5554.003.patch


 Similar with HDFS-5285, we can add a FileWithSnapshot feature to INodeFile 
 and use it to replace the current INodeFileWithSnapshot.





[jira] [Resolved] (HDFS-5541) LIBHDFS questions and performance suggestions

2013-12-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-5541.
-

Resolution: Invalid

I've filed issues HDFS-5642, HDFS-5643 and HDFS-5644.  I'm going to resolve 
this one.  Thanks, [~stevebovy] and [~cmccabe].

 LIBHDFS questions and performance suggestions
 -

 Key: HDFS-5541
 URL: https://issues.apache.org/jira/browse/HDFS-5541
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Stephen Bovy
Priority: Minor
 Attachments: pdclibhdfs.zip


 Since libhdfs is a client interface, and especially because it is a C 
 interface, it should be assumed that the code will be used across many 
 different platforms and many different compilers.
 1) The code should be cross platform (no Linux extras).
 2) The code should compile on standard c89 compilers; the
   {least common denominator rule applies here}!
 C code with a c extension should follow the rules of the C standard: 
 all variables must be declared at the beginning of scope, and no (//) 
 comments allowed.
  I just spent a week white-washing the code back to normal C standards so 
  that it could compile and build across a wide range of platforms.
 Now on to performance questions:
 1) If threads are not used, why do a thread attach? (When threads are not 
 used, all the thread attach nonsense is a waste of time and a performance 
 killer.)
 2) The JVM init code should not be embedded within the context of every 
 function call.  The JVM init code should be in a stand-alone LIBINIT 
 function that is only invoked once.  The JVM * and the JNI * should be 
 global variables for use when no threads are utilized.
 3) When threads are utilized, the attach function can use the GLOBAL jvm * 
 created by the LIBINIT { WHICH IS INVOKED ONLY ONCE } and thus safely 
 outside the scope of any LOOP that is using the functions.
 4) Hash table and locking: why?
 When threads are used, the hash table locking is going to hurt performance.  
 Why not use thread local storage for the hash table? That way no locking is 
 required, either with or without threads.
 5) FINALLY, Windows compatibility: 
 do not use posix features if they cannot easily be replaced on other 
 platforms!





[jira] [Commented] (HDFS-4201) NPE in BPServiceActor#sendHeartBeat

2013-12-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842102#comment-13842102
 ] 

Hadoop QA commented on HDFS-4201:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12617539/trunk-4201_v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5672//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5672//console

This message is automatically generated.

 NPE in BPServiceActor#sendHeartBeat
 ---

 Key: HDFS-4201
 URL: https://issues.apache.org/jira/browse/HDFS-4201
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Eli Collins
Assignee: Jimmy Xiang
Priority: Critical
 Fix For: 3.0.0

 Attachments: trunk-4201.patch, trunk-4201_v2.patch


 Saw the following NPE in a log.
 Think this is likely due to {{dn}} or {{dn.getFSDataset()}} being null (not 
 {{bpRegistration}}), due to a configuration or local directory failure.
 {code}
 2012-09-25 04:33:20,782 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
 For namenode svsrs00127/11.164.162.226:8020 using DELETEREPORT_INTERVAL of 
 30 msec  BLOCKREPORT_INTERVAL of 2160msec Initial delay: 0msec; 
 heartBeatInterval=3000
 2012-09-25 04:33:20,782 ERROR 
 org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService 
 for Block pool BP-1678908700-11.164.162.226-1342785481826 (storage id 
 DS-1031100678-11.164.162.251-5010-1341933415989) service to 
 svsrs00127/11.164.162.226:8020
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:434)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:520)
 at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
 at java.lang.Thread.run(Thread.java:722)
 {code}





[jira] [Updated] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-06 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-4983:


Attachment: HDFS-4983.006.patch

 Numeric usernames do not work with WebHDFS FS
 -

 Key: HDFS-4983
 URL: https://issues.apache.org/jira/browse/HDFS-4983
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Yongjun Zhang
  Labels: patch
 Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
 HDFS-4983.003.patch, HDFS-4983.004.patch, HDFS-4983.005.patch, 
 HDFS-4983.006.patch, HDFS-4983.006.patch


 Per the file 
 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
 Given this, using a username such as 123 seems to fail for some reason 
 (tried on insecure setup):
 {code}
 [123@host-1 ~]$ whoami
 123
 [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
 -ls: Invalid value: 123 does not belong to the domain 
 ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {code}
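The rejection follows directly from the quoted pattern: its first character class, {{[A-Za-z_]}}, excludes digits, so an all-numeric username can never match. A small self-contained check (the class name here is illustrative, not the actual {{UserParam}} code):

```java
import java.util.regex.Pattern;

// Demonstrates why an all-numeric username is rejected: the DOMAIN
// pattern requires the first character to be a letter or underscore.
public class UserParamRegexDemo {
    static final Pattern DOMAIN =
        Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*[$]?$");

    public static void main(String[] args) {
        System.out.println(DOMAIN.matcher("schu").matches()); // true
        System.out.println(DOMAIN.matcher("123").matches());  // false
    }
}
```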





[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-12-06 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842105#comment-13842105
 ] 

Yongjun Zhang commented on HDFS-4983:
-

Thanks Haohui.

I suspect the above test failure is caused by a glitch in the test env, 
resubmitted previous version to trigger another run.


 Numeric usernames do not work with WebHDFS FS
 -

 Key: HDFS-4983
 URL: https://issues.apache.org/jira/browse/HDFS-4983
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Yongjun Zhang
  Labels: patch
 Attachments: HDFS-4983.001.patch, HDFS-4983.002.patch, 
 HDFS-4983.003.patch, HDFS-4983.004.patch, HDFS-4983.005.patch, 
 HDFS-4983.006.patch, HDFS-4983.006.patch


 Per the file 
 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
 Given this, using a username such as 123 seems to fail for some reason 
 (tried on insecure setup):
 {code}
 [123@host-1 ~]$ whoami
 123
 [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
 -ls: Invalid value: 123 does not belong to the domain 
 ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {code}




