[ https://issues.apache.org/jira/browse/NIFI-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15990772#comment-15990772 ]
Mark Payne commented on NIFI-3739:
----------------------------------

[~joewitt] - thanks for the review again! I will certainly address the issue of referencing the old FlowFile. Re: #2 - I do agree that outputting each message as an individual FlowFile will perform worse. However, I am inclined to use that option anyway, for now, as I don't know of a better alternative. We could look at constructing some sort of BytesRecord as noted, but that can get messy, because once we've extracted the bytes we still need another reader to make sense of them. I think there are some good options going forward, but they will take some thought. For now, I want to output failed messages to a 'parse.failure' relationship as individual FlowFiles; we are close to the 1.2.0 release and I think this is a reasonable option, at least for the short term. We can always add a Property in the future to let the user choose a different option.

> Create Processors for publishing records to and consuming records from Kafka
> ----------------------------------------------------------------------------
>
>                 Key: NIFI-3739
>                 URL: https://issues.apache.org/jira/browse/NIFI-3739
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>             Fix For: 1.2.0
>
>
> With the new record readers & writers that have been added, it would
> be good to allow records to be pushed to and pulled from Kafka. Currently, we
> support demarcated data, but sometimes we can't correctly demarcate data in a
> way that keeps the format valid (JSON is a good example). We should have
> processors that use the record readers and writers for this.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
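
The routing strategy discussed above can be sketched roughly as follows. This is a minimal, self-contained illustration only: the class and method names (`ParseFailureRouting`, `route`) are hypothetical stand-ins, not the actual NiFi processor or `ProcessSession` API. It shows the core idea of keeping successfully parsed records together while emitting each unparseable message on its own, as an individual FlowFile would be routed to 'parse.failure'.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed parse-failure handling: records that
// parse are collected together (they would form the outgoing 'success'
// FlowFile), while each record that fails parsing is kept individually
// (each would become its own FlowFile on the 'parse.failure' relationship).
public class ParseFailureRouting {

    public static final class RoutingResult {
        // Would become the content of the single 'success' FlowFile.
        public final List<Integer> parsed = new ArrayList<>();
        // Each entry would become its own 'parse.failure' FlowFile.
        public final List<String> failures = new ArrayList<>();
    }

    public static RoutingResult route(List<String> rawMessages) {
        RoutingResult result = new RoutingResult();
        for (String raw : rawMessages) {
            try {
                // Stand-in for RecordReader#nextRecord(): parse or throw.
                result.parsed.add(Integer.parseInt(raw.trim()));
            } catch (NumberFormatException e) {
                // Instead of failing the whole batch, preserve the raw
                // bytes of this one message and route it on its own.
                result.failures.add(raw);
            }
        }
        return result;
    }
}
```

The trade-off the comment describes is visible here: per-message failure routing preserves the original bytes of each bad record at the cost of producing many small outputs when failures are frequent.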