[ https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072393#comment-14072393 ]

Hudson commented on HDFS-6455:
------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #5955 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5955/])
HDFS-6455. NFS: Exception should be added in NFS log for invalid separator in 
nfs.exports.allowed.hosts. Contributed by Abhiraj Butala (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612947)
* /hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
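
For context, the committed change routes the exports-parsing failure into the NFS daemon log rather than only the .out file. Below is a minimal sketch of that idea (a hypothetical illustration, not the committed patch; the class name, method, and log message are made up), assuming NfsExports.getInstance(Configuration) throws IllegalArgumentException on a malformed value, as the stack traces in the issue description show:
{noformat}
// Hypothetical sketch only, not the actual HDFS-6455 patch.
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.nfs.NfsExports;

public class ExportsInitSketch {
  private static final Log LOG = LogFactory.getLog(ExportsInitSketch.class);

  // Initialize the exports table, but log a malformed
  // nfs.exports.allowed.hosts value through the daemon logger (so it reaches
  // the NFS log file) before propagating the failure.
  static NfsExports initExports(Configuration conf) {
    try {
      return NfsExports.getInstance(conf);
    } catch (IllegalArgumentException e) {
      LOG.error("Invalid NFS exports configuration in nfs.exports.allowed.hosts", e);
      throw e;
    }
  }
}
{noformat}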


> NFS: Exception should be added in NFS log for invalid separator in 
> nfs.exports.allowed.hosts
> --------------------------------------------------------------------------------------------
>
>                 Key: HDFS-6455
>                 URL: https://issues.apache.org/jira/browse/HDFS-6455
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: nfs
>    Affects Versions: 2.2.0
>            Reporter: Yesha Vora
>            Assignee: Abhiraj Butala
>             Fix For: 2.6.0
>
>         Attachments: HDFS-6455.002.patch, HDFS-6455.patch
>
>
> The error for an invalid separator in the dfs.nfs.exports.allowed.hosts property 
> should be logged in the NFS log file instead of the nfs.out file.
> Steps to reproduce:
> 1. Pass an invalid separator in dfs.nfs.exports.allowed.hosts (a correctly 
> formatted value is sketched at the end of this description):
> {noformat}
> <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1  ro:host2 
> rw</value></property>
> {noformat}
> 2. Restart the NFS server. The NFS server fails to start and prints the exception to the console.
> {noformat}
> [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
> UserKnownHostsFile=/dev/null host1 "sudo su - -c 
> \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs"
> starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
> formatted line 'host1 ro:host2 rw'
>       at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
>       at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
>       at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
>       at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
>       at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
>       at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
> {noformat}
> The NFS log does not print any error message; it directly shuts down. 
> {noformat}
> STARTUP_MSG:   java = 1.6.0_31
> ************************************************************/
> 2014-05-27 18:47:13,972 INFO  nfs3.Nfs3Base (SignalLogger.java:register(91)) 
> - registered UNIX signal handlers for [TERM, HUP, INT]
> 2014-05-27 18:47:14,169 INFO  nfs3.IdUserGroup 
> (IdUserGroup.java:updateMapInternal(159)) - Updated user map size:259
> 2014-05-27 18:47:14,179 INFO  nfs3.IdUserGroup 
> (IdUserGroup.java:updateMapInternal(159)) - Updated group map size:73
> 2014-05-27 18:47:14,192 INFO  nfs3.Nfs3Base (StringUtils.java:run(640)) - 
> SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down Nfs3 at 
> {noformat}
> The nfs.out file has the exception:
> {noformat}
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
> formatted line 'host1 ro:host2 rw'
>         at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
>         at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
>         at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
>         at 
> org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
>         at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
>         at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
> ulimit -a for user hdfs
> core file size          (blocks, -c) 409600
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 188893
> max locked memory       (kbytes, -l) unlimited
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 32768
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 10240
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 65536
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> {noformat}
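> For reference, NfsExports separates entries in this property with a semicolon (the value above uses a colon and extra spaces instead), so a correctly formatted value would look like the following (host names are illustrative):
> {noformat}
> <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1 ro;host2 rw</value></property>
> {noformat}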



--
This message was sent by Atlassian JIRA
(v6.2#6252)
