[ https://issues.apache.org/jira/browse/NIFI-3658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15972623#comment-15972623 ]

ASF GitHub Bot commented on NIFI-3658:
--------------------------------------

Github user mattyb149 commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1668#discussion_r111946539
  
    --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ConvertRecord.java ---
    @@ -0,0 +1,158 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.nifi.processors.standard;
    +
    +import java.io.IOException;
    +import java.io.InputStream;
    +import java.io.OutputStream;
    +import java.util.ArrayList;
    +import java.util.HashMap;
    +import java.util.HashSet;
    +import java.util.List;
    +import java.util.Map;
    +import java.util.Set;
    +import java.util.concurrent.atomic.AtomicReference;
    +
    +import org.apache.nifi.annotation.behavior.EventDriven;
    +import org.apache.nifi.annotation.behavior.InputRequirement;
    +import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
    +import org.apache.nifi.annotation.behavior.SideEffectFree;
    +import org.apache.nifi.annotation.behavior.SupportsBatching;
    +import org.apache.nifi.annotation.behavior.WritesAttribute;
    +import org.apache.nifi.annotation.behavior.WritesAttributes;
    +import org.apache.nifi.annotation.documentation.CapabilityDescription;
    +import org.apache.nifi.annotation.documentation.Tags;
    +import org.apache.nifi.components.PropertyDescriptor;
    +import org.apache.nifi.flowfile.FlowFile;
    +import org.apache.nifi.flowfile.attributes.CoreAttributes;
    +import org.apache.nifi.processor.AbstractProcessor;
    +import org.apache.nifi.processor.ProcessContext;
    +import org.apache.nifi.processor.ProcessSession;
    +import org.apache.nifi.processor.Relationship;
    +import org.apache.nifi.processor.exception.ProcessException;
    +import org.apache.nifi.processor.io.StreamCallback;
    +import org.apache.nifi.serialization.MalformedRecordException;
    +import org.apache.nifi.serialization.RecordReader;
    +import org.apache.nifi.serialization.RecordSetWriter;
    +import org.apache.nifi.serialization.RecordSetWriterFactory;
    +import org.apache.nifi.serialization.RowRecordReaderFactory;
    +import org.apache.nifi.serialization.WriteResult;
    +
    +@EventDriven
    +@SupportsBatching
    +@InputRequirement(Requirement.INPUT_REQUIRED)
    +@SideEffectFree
    +@Tags({"convert", "generic", "schema", "json", "csv", "avro", "log", "logs", "freeform", "text"})
    +@WritesAttributes({
    +    @WritesAttribute(attribute = "mime.type", description = "Sets the mime.type attribute to the MIME Type specified by the Record Writer"),
    +    @WritesAttribute(attribute = "record.count", description = "The number of records in the FlowFile")
    +})
    +@CapabilityDescription("Converts records from one data format to another using configured Record Reader and Record Write Controller Services. "
    +    + "The Reader and Writer must be configured with \"matching\" schemas. By this, we mean the schemas must have the same field names. The types of the fields "
    +    + "do not have to be the same if a field value can be coerced from one format to another. For instance, if the input schema has a field named \"balance\" of type double, "
    +    + "the output schema can have a field named \"balance\" with a type of string, double, or float. If any field is present in the input that is not present in the output, "
    +    + "the field will be left out of the output. If any field is specified in the output schema but is not present in the input data/schema, then the field will not be "
    +    + "present in the output.")
    +public class ConvertRecord extends AbstractProcessor {
    +
    +    static final PropertyDescriptor RECORD_READER = new PropertyDescriptor.Builder()
    +        .name("Record Reader")
    --- End diff ---
    
    Many in the community are using (and asking for) the user-friendly name in .displayName(), with a machine-friendly name like 'convert-record-record-reader' in the .name() property.
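
    For illustration, a minimal sketch of what that suggestion could look like, assuming the descriptor otherwise stays as in the diff; the description text and the controller-service binding to RowRecordReaderFactory are inferred from the imports shown above, not the committed code:

        // Hypothetical sketch of the reviewer's suggestion:
        // machine-friendly identifier in .name(), user-friendly label in .displayName().
        static final PropertyDescriptor RECORD_READER = new PropertyDescriptor.Builder()
            .name("convert-record-record-reader")        // stable, machine-friendly name
            .displayName("Record Reader")                // label shown to users in the UI
            .description("Specifies the Controller Service to use for reading incoming records") // assumed wording
            .identifiesControllerService(RowRecordReaderFactory.class) // assumed binding, based on the import in the diff
            .required(true)
            .build();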


> Add Processor capable of converting between different "record formats" with 
> the same schema
> -------------------------------------------------------------------------------------------
>
>                 Key: NIFI-3658
>                 URL: https://issues.apache.org/jira/browse/NIFI-3658
>             Project: Apache NiFi
>          Issue Type: Task
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>             Fix For: 1.2.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
