[ https://issues.apache.org/jira/browse/NIFI-4639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16282346#comment-16282346 ]

ASF GitHub Bot commented on NIFI-4639:
--------------------------------------

Github user joewitt commented on the issue:

    https://github.com/apache/nifi/pull/2292
  
    Ok, so slower is really bad, but incorrect is even worse.  Perhaps we 
should accept this as-is, then file a new JIRA to improve performance.  Do you 
agree?


> PublishKafkaRecord with Avro writer: schema lost from output
> ------------------------------------------------------------
>
>                 Key: NIFI-4639
>                 URL: https://issues.apache.org/jira/browse/NIFI-4639
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.4.0
>            Reporter: Matthew Silverman
>         Attachments: Demo_Names_NiFi_bug.xml
>
>
> I have a {{PublishKafkaRecord_0_10}} configured with an 
> {{AvroRecordSetWriter}}, in turn configured to "Embed Avro Schema".  However, 
> when I consume data from the Kafka stream I receive individual records that 
> lack a schema header.
> As a workaround, I can send the flow files through a {{SplitRecord}} 
> processor, which does embed the Avro schema into each resulting flow file.
> Comparing the code for {{SplitRecord}} and the {{PublishKafkaRecord}} 
> processors, I believe the issue is that {{PublisherLease}} wipes the output 
> stream after calling {{createWriter}}; however it is 
> {{AvroRecordSetWriter#createWriter}} that writes the Avro header to the 
> output stream.  {{SplitRecord}}, on the other hand, creates a new writer for 
> each output record.
> I've attached my flow.
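
A minimal sketch of the suspected failure mode (illustrative names only; this is not NiFi's actual API): if a writer emits its header as a side effect of being created, then resetting the destination stream after creation discards that header, while creating a fresh writer per output (the SplitRecord pattern) preserves it.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical stand-in for a writer that, like AvroRecordSetWriter,
// writes its schema header at creation time rather than per record.
class HeaderWritingWriter {
    private final ByteArrayOutputStream out;

    HeaderWritingWriter(ByteArrayOutputStream out) {
        this.out = out;
        // Header (e.g. an embedded Avro schema) is written here, on creation.
        out.writeBytes("HEADER:".getBytes(StandardCharsets.UTF_8));
    }

    void writeRecord(String record) {
        out.writeBytes(record.getBytes(StandardCharsets.UTF_8));
    }
}

public class HeaderLossDemo {
    public static void main(String[] args) {
        // Buggy pattern: create the writer, then wipe the stream
        // (analogous to PublisherLease resetting after createWriter).
        ByteArrayOutputStream lost = new ByteArrayOutputStream();
        HeaderWritingWriter w1 = new HeaderWritingWriter(lost);
        lost.reset();                 // header is discarded here
        w1.writeRecord("rec1");
        System.out.println(lost.toString(StandardCharsets.UTF_8));

        // Workaround pattern: a fresh writer per output, so each
        // output carries its own header.
        ByteArrayOutputStream kept = new ByteArrayOutputStream();
        HeaderWritingWriter w2 = new HeaderWritingWriter(kept);
        w2.writeRecord("rec1");
        System.out.println(kept.toString(StandardCharsets.UTF_8));
    }
}
```

Running the sketch prints the record without its header in the first case and with the header in the second, matching the observed difference between PublishKafkaRecord and SplitRecord output.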



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
