[ https://issues.apache.org/jira/browse/NIFI-2553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15419067#comment-15419067 ]

ASF GitHub Bot commented on NIFI-2553:
--------------------------------------

Github user YolandaMDavis commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/843#discussion_r74617540
  
    --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/AbstractHadoopProcessor.java ---
    @@ -90,20 +90,55 @@ public String toString() {
             }
         }
     
    +    // validator to ensure that valid EL expressions are used in the directory property
    +    static final Validator PATH_WITH_EL_VALIDATOR = new Validator() {
    --- End diff ---
    
    Wondering if this could be in the standard validators class instead? There's an
    ATTRIBUTE_EXPRESSION_LANGUAGE_VALIDATOR that's unused but looks very similar; it
    just doesn't have the non-empty requirement. It seems like a few tweaks could make
    it generic enough for broad application. Thoughts?

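Following up on the comment above: a rough, hypothetical sketch of what a more generic
validator in StandardValidators might look like, combining the non-empty requirement with
Expression Language validation through the ValidationContext. The name
NON_EMPTY_EL_VALIDATOR and the exact composition are illustrative assumptions, not
anything taken from the PR.

{code}
// Hypothetical sketch only -- not the actual PR change or an existing StandardValidators field.
// Uses org.apache.nifi.components.Validator, ValidationResult and ValidationContext.
static final Validator NON_EMPTY_EL_VALIDATOR = new Validator() {
    @Override
    public ValidationResult validate(final String subject, final String input, final ValidationContext context) {
        // non-empty requirement
        if (input == null || input.trim().isEmpty()) {
            return new ValidationResult.Builder()
                    .subject(subject).input(input).valid(false)
                    .explanation("a non-empty value is required").build();
        }
        // validate any Expression Language the value contains
        if (context.isExpressionLanguageSupported(subject) && context.isExpressionLanguagePresent(input)) {
            final String error = context.newExpressionLanguageCompiler().validateExpression(input, true);
            if (error != null && !error.isEmpty()) {
                return new ValidationResult.Builder()
                        .subject(subject).input(input).valid(false)
                        .explanation(error).build();
            }
        }
        return new ValidationResult.Builder()
                .subject(subject).input(input).valid(true).build();
    }
};
{code}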

> HDFS processors throwing exception from OnScheduled when directory is an invalid URI
> -------------------------------------------------------------------------------------
>
>                 Key: NIFI-2553
>                 URL: https://issues.apache.org/jira/browse/NIFI-2553
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.0.0, 0.7.0
>            Reporter: Bryan Bende
>            Assignee: Bryan Bende
>            Priority: Minor
>             Fix For: 1.0.0
>
>
> If you enter a directory string that results in an invalid URI, the HDFS
> processors will throw an unexpected exception from OnScheduled because of a
> logging statement in AbstractHadoopProcessor:
> {code}
> getLogger().info("Initialized a new HDFS File System with working dir: {} default block size: {} default replication: {} config: {}",
>         new Object[] { fs.getWorkingDirectory(), fs.getDefaultBlockSize(new Path(dir)), fs.getDefaultReplication(new Path(dir)), config.toString() });
> {code}
> An example input for the directory that can produce this problem:
> data_${literal('testing'):substring(0,4)%7D
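
For illustration, a minimal standalone sketch (my own, not from the ticket) of the failure
mode: the unevaluated expression above is not a valid URI, so Hadoop's Path constructor
throws an IllegalArgumentException while the log arguments are being built, which is the
unexpected exception seen from OnScheduled.

{code}
// Standalone illustration (assumes only hadoop-common on the classpath).
import org.apache.hadoop.fs.Path;

public class InvalidDirectoryExample {
    public static void main(String[] args) {
        final String dir = "data_${literal('testing'):substring(0,4)%7D";
        try {
            new Path(dir);  // parsed as a URI; the unevaluated EL makes it invalid
        } catch (IllegalArgumentException e) {
            // Path wraps the underlying URISyntaxException
            System.out.println("Invalid directory: " + e.getMessage());
        }
    }
}
{code}
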
> In addition to this, FetchHDFS, ListHDFS, GetHDFS, and PutHDFS all create new
> Path instances in their onTrigger methods from the same directory, outside of a
> try/catch that would wrap the failure in a ProcessException (if it got past the
> logging issue above).


