Github user gauravgopi123 commented on a diff in the pull request:

    
https://github.com/apache/incubator-apex-malhar/pull/154#discussion_r48803387

    --- Diff: contrib/src/main/java/com/datatorrent/contrib/parser/CsvParser.java ---
    @@ -19,42 +19,49 @@
     package com.datatorrent.contrib.parser;
     
     import java.io.IOException;
    -import java.util.ArrayList;
    +import java.util.List;
    +import java.util.Map;
     
     import javax.validation.constraints.NotNull;
     
     import org.slf4j.Logger;
     import org.slf4j.LoggerFactory;
    -import org.supercsv.cellprocessor.Optional;
    -import org.supercsv.cellprocessor.ParseBool;
    -import org.supercsv.cellprocessor.ParseChar;
    -import org.supercsv.cellprocessor.ParseDate;
    -import org.supercsv.cellprocessor.ParseDouble;
    -import org.supercsv.cellprocessor.ParseInt;
    -import org.supercsv.cellprocessor.ParseLong;
     import org.supercsv.cellprocessor.ift.CellProcessor;
    +import org.supercsv.exception.SuperCsvException;
     import org.supercsv.io.CsvBeanReader;
    +import org.supercsv.io.CsvMapReader;
     import org.supercsv.prefs.CsvPreference;
    +
    +import org.apache.commons.lang.StringUtils;
     import org.apache.hadoop.classification.InterfaceStability;
     
    -import com.datatorrent.api.Context;
    +import com.datatorrent.api.AutoMetric;
     import com.datatorrent.api.Context.OperatorContext;
    +import com.datatorrent.api.DefaultOutputPort;
    +import com.datatorrent.contrib.parser.DelimitedSchema.Field;
     import com.datatorrent.lib.parser.Parser;
    +import com.datatorrent.lib.util.KeyValPair;
     import com.datatorrent.lib.util.ReusableStringReader;
    -import com.datatorrent.netlet.util.DTThrowable;
     
     /**
    - * Operator that converts CSV string to Pojo <br>
    + * Operator that parses a delimited tuple against a specified schema <br>
    + * Schema is specified in a json format as per {@link DelimitedSchema} that
    + * contains field information and constraints for each field <br>
      * Assumption is that each field in the delimited data should map to a simple
      * java type.<br>
      * <br>
      * <b>Properties</b> <br>
    - * <b>fieldInfo</b>:User need to specify fields and their types as a comma
    - * separated string having format &lt;NAME&gt;:&lt;TYPE&gt;|&lt;FORMAT&gt; in
    - * the same order as incoming data. FORMAT refers to dates with dd/mm/yyyy as
    - * default e.g name:string,dept:string,eid:integer,dateOfJoining:date|dd/mm/yyyy <br>
    - * <b>fieldDelimiter</b>: Default is comma <br>
    - * <b>lineDelimiter</b>: Default is '\r\n'
    + * <b>schemaPath</b>:Complete path of schema file in HDFS <br>
    + * <b>clazz</b>:Pojo class <br>
    + * <b>Ports</b> <br>
    --- End diff --
    
    The port names mentioned below don't match the actual variable names. The ports are `in` for input, `out` for the POJO output, `err` for errors, and `parsedOutput` for the map output.
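
    For reference, a minimal sketch of how port fields with those names might be declared on the operator. The nested classes below are stand-in stubs I've written for illustration only; the real `com.datatorrent.api.DefaultInputPort`, `DefaultOutputPort`, and `com.datatorrent.lib.util.KeyValPair` types have richer APIs, and the tuple type parameters here are assumptions, not taken from the actual `CsvParser` source.

    ```java
    import java.util.Map;

    public class PortNamingSketch {

      // Stub standing in for com.datatorrent.api.DefaultOutputPort (assumption:
      // the real class is generic and lets the operator emit tuples).
      static class DefaultOutputPort<T> {
        public void emit(T tuple) { /* no-op in this stub */ }
      }

      // Stub standing in for com.datatorrent.api.DefaultInputPort.
      static abstract class DefaultInputPort<T> {
        public abstract void process(T tuple);
      }

      // Stub standing in for com.datatorrent.lib.util.KeyValPair.
      static class KeyValPair<K, V> {
        final K key;
        final V value;
        KeyValPair(K key, V value) { this.key = key; this.value = value; }
      }

      // Port names as stated in the review comment: `in` for input, `out` for
      // the POJO output, `err` for errors, `parsedOutput` for the map output.
      public final DefaultInputPort<byte[]> in = new DefaultInputPort<byte[]>() {
        @Override
        public void process(byte[] tuple) { /* parse the delimited tuple here */ }
      };
      public final DefaultOutputPort<Object> out = new DefaultOutputPort<Object>();
      public final DefaultOutputPort<KeyValPair<String, String>> err =
          new DefaultOutputPort<KeyValPair<String, String>>();
      public final DefaultOutputPort<Map<String, Object>> parsedOutput =
          new DefaultOutputPort<Map<String, Object>>();
    }
    ```

    Keeping the javadoc's port names in sync with these field names is what the comment is asking for, since downstream users wire the DAG by the field names.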

