[ https://issues.apache.org/jira/browse/HDFS-15072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17000914#comment-17000914 ]

Masatake Iwasaki commented on HDFS-15072:
-----------------------------------------

{quote}FsVolumeImpl.initializeCacheExecutor calls Guava's 
ThreadFactoryBuilder.setNameFormat, passing in the String representation 
of the parent File.
{quote}
This means the scope is not limited to MiniDFSCluster. DataNode does not accept 
a path containing {{%}} in {{dfs.datanode.data.dir}} either.
{noformat}
$ grep -A1 dfs.datanode.data.dir etc/hadoop/hdfs-site.xml 
    <name>dfs.datanode.data.dir</name>
    <value>/tmp/data%-</value>

$ bin/hdfs datanode
...(snip)
2019-12-20 21:48:51,970 WARN datanode.DataNode: Unexpected exception in block pool Block pool <registering> (Datanode Uuid e0b42c6d-b28a-4cd7-8e70-82d8a3a8faac) service to localhost/127.0.0.1:8020
java.util.DuplicateFormatFlagsException: Flags = '-'
        at java.util.Formatter$Flags.parse(Formatter.java:4443)
        at java.util.Formatter$FormatSpecifier.flags(Formatter.java:2640)
        at java.util.Formatter$FormatSpecifier.<init>(Formatter.java:2709)
        at java.util.Formatter.parse(Formatter.java:2560)
        at java.util.Formatter.format(Formatter.java:2501)
        at java.util.Formatter.format(Formatter.java:2455)
        at java.lang.String.format(String.java:2981)
        at com.google.common.util.concurrent.ThreadFactoryBuilder.format(ThreadFactoryBuilder.java:182)
        at com.google.common.util.concurrent.ThreadFactoryBuilder.setNameFormat(ThreadFactoryBuilder.java:70)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.initializeCacheExecutor(FsVolumeImpl.java:208)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.<init>(FsVolumeImpl.java:183)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImplBuilder.build(FsVolumeImplBuilder.java:90)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.addVolume(FsDatasetImpl.java:458)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:348)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
...(snip) 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost/127.0.0.1
************************************************************/
{noformat}
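
For reference, the failure is reproducible with Guava alone: {{ThreadFactoryBuilder.setNameFormat}} eagerly validates the format string via {{String.format}}, as the stack trace above shows. A minimal sketch, assuming the thread name format is built roughly as {{"FsVolumeImplWorker-" + parent + "-%d"}} in {{FsVolumeImpl.initializeCacheExecutor}}:
{code:java}
import com.google.common.util.concurrent.ThreadFactoryBuilder;

public class NameFormatRepro {
  public static void main(String[] args) {
    // Volume directory containing '%', same value as the
    // dfs.datanode.data.dir example above.
    String parent = "/tmp/data%-";

    // Assumed to mirror how FsVolumeImpl.initializeCacheExecutor builds the
    // thread name format. The "%-" from the path merges with the following
    // "-" into a specifier with a duplicate '-' flag, so setNameFormat throws
    // java.util.DuplicateFormatFlagsException: Flags = '-'.
    new ThreadFactoryBuilder()
        .setDaemon(true)
        .setNameFormat("FsVolumeImplWorker-" + parent + "-%d")
        .build();
  }
}
{code}
Any directory string in which {{%}} is followed by characters that do not form a valid format specifier fails the same way.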

> HDFS MiniCluster fails to start when run in directory path with a %
> -------------------------------------------------------------------
>
>                 Key: HDFS-15072
>                 URL: https://issues.apache.org/jira/browse/HDFS-15072
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.5
>         Environment: I encountered this on a Mac while running an HBase 
> minicluster that was using Hadoop 2.7.5. However, the code looks the same in 
> trunk, so it likely affects most or all current versions. 
>            Reporter: Geoffrey Jacoby
>            Priority: Minor
>
> FsVolumeImpl.initializeCacheExecutor calls Guava's 
> ThreadFactoryBuilder.setNameFormat, passing in the String representation of 
> the parent File. Guava passes the String, unescaped, to String.format, which 
> treats % as a special character. That means that if parent.toString() contains 
> a percent sign followed by characters that do not form a valid format 
> specifier for String.format(), you'll get an exception that stops the 
> MiniCluster from starting up. 
> I did not check whether this would also happen on a normal DataNode daemon. 
> initializeCacheExecutor should escape the parent file name before passing it 
> in. 
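
A possible fix along the lines suggested in the description would be to escape {{%}} before handing the path to setNameFormat, since {{%%}} is the literal percent in java.util.Formatter syntax. A minimal, hypothetical sketch (not necessarily the committed patch):
{code:java}
final class FormatEscaping {
  // Hypothetical helper illustrating the suggested escaping; the idea is that
  // FsVolumeImpl.initializeCacheExecutor would apply this to parent.toString()
  // before calling ThreadFactoryBuilder.setNameFormat.
  static String escapeForNameFormat(String dir) {
    // "%%" renders as a literal '%', so a path such as "/tmp/data%-" no longer
    // forms a broken format specifier.
    return dir.replace("%", "%%");
  }

  public static void main(String[] args) {
    // Prints: FsVolumeImplWorker-/tmp/data%--0
    System.out.println(String.format(
        "FsVolumeImplWorker-" + escapeForNameFormat("/tmp/data%-") + "-%d", 0));
  }
}
{code}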



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
