[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/2493 On the consume side you would generate a FlowFile, but you'd then write to it via the streaming API. #neverbytearray is a movement we should start. ---
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user david-streamlio commented on the issue: https://github.com/apache/nifi/pull/2493 I completely agree with your observation that having to land the entire, unbroken file into NiFi's repo rather than streaming it through memory is inefficient. I will talk to the Pulsar team about getting a true streaming API implemented. It is a great suggestion. If we were to write such an API, do you have some examples of how we would leverage it inside NiFi? I thought we had to generate a flow file for our Consumer processor. Is there a way to hand off streams instead? ---
[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages
[ https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381097#comment-16381097 ] ASF GitHub Bot commented on NIFI-2630: -- Github user mosermw commented on the issue: https://github.com/apache/nifi/pull/2458 @markap14 I added a commit to do character set validation in a property validator instead of OnScheduled. > Allow PublishJMS processor to create TextMessages > - > > Key: NIFI-2630 > URL: https://issues.apache.org/jira/browse/NIFI-2630 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.0 >Reporter: James Anderson >Assignee: Michael Moser >Priority: Minor > Labels: patch > Attachments: > 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch > > > Create a new configuration option for PublishJMS that allows the processor to > be configured to emit instances of TextMessages as well as BytesMessage. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2458: NIFI-2630 Allow PublishJMS to send TextMessages
Github user mosermw commented on the issue: https://github.com/apache/nifi/pull/2458 @markap14 I added a commit to do character set validation in a property validator instead of OnScheduled. ---
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/2493 Yes, correct. It would be awesome if the client/broker interface could support a genuinely streaming interface rather than a byte[], as this helps frameworks that can operate on streams, rather than on fully loaded objects, run in the most GC-efficient manner possible. It means sending a 1 GB object, for example, never holds more than some small buffer size in memory. If you want to support the use case you're describing, then NiFi has to load the full image/document/etc. into a byte[] and then hand that over to the Pulsar interface, so larger objects become a problem in terms of efficiency/parallelism. No biggie for now, but it does make that use case a bit less exciting :) ---
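The trade-off joewitt describes can be sketched in plain Java (hypothetical helper names; this is not the actual NiFi or Pulsar client code): a streaming copy holds only a small fixed buffer no matter how large the payload is, while a byte[] hand-off must materialize the entire payload on the heap first.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

public class StreamingVsByteArray {

    // Streaming copy: holds at most one 8 KB buffer in memory regardless of
    // payload size, so even a 1 GB object never inflates the heap.
    public static long copyStreaming(InputStream in, OutputStream out) {
        byte[] buffer = new byte[8192];
        long total = 0;
        try {
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
                total += read;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return total;
    }

    // byte[] hand-off: the whole payload must be materialized on the heap
    // before a client that only accepts byte[] can be called.
    public static byte[] loadFully(InputStream in) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        copyStreaming(in, bos);
        return bos.toByteArray();
    }
}
```

The first shape is what a stream-oriented client API would enable; the second is what a byte[]-only client API forces on the framework.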
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user david-streamlio commented on the issue: https://github.com/apache/nifi/pull/2493 The Pulsar client API, https://pulsar.apache.org/api/client/, currently only supports byte[] payloads. By true streaming, I am assuming you mean an API that returns an IO stream object of some sort. ---
[GitHub] nifi issue #2430: NIFI-4809 - Implement a SiteToSiteMetricsReportingTask
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2430 I like all those ideas! ---
[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381071#comment-16381071 ] ASF GitHub Bot commented on NIFI-4809: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2430 I like all those ideas! > Implement a SiteToSiteMetricsReportingTask > -- > > Key: NIFI-4809 > URL: https://issues.apache.org/jira/browse/NIFI-4809 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > At the moment there is an AmbariReportingTask to send the NiFi-related > metrics of the host to the Ambari Metrics Service. In a multi-cluster > configuration, or when working with MiNiFi (Java) agents, it might not be > possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS > REST API. > To solve this problem, a solution would be to implement a > SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi > instance/cluster that will be able to publish the metrics into AMS (using > InvokeHTTP). This way, it is possible to have the metrics of all the > instances exposed in one AMS instance. > I propose to send the data formatted as we are doing right now in the Ambari > reporting task. If needed, it can be easily converted into another schema > using the record processors once received via S2S. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381061#comment-16381061 ] ASF GitHub Bot commented on NIFI-4809: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2430

What about adding a property to let the user decide the output format?
- "Output format": two options, "Ambari Metrics Collector format" or "Record based format"
- "Record writer": used if and only if "Record based format" is selected

My primary objective was to have this reporting task usable in MiNiFi Java agents to send the metrics to a NiFi cluster that would publish the metrics to AMS. But we could definitely use this reporting task to get the metrics and store the data in another store such as Elasticsearch. In any case, I'd add an "Additional details" page to provide information about the schemas.
[GitHub] nifi issue #2430: NIFI-4809 - Implement a SiteToSiteMetricsReportingTask
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2430

What about adding a property to let the user decide the output format?
- "Output format": two options, "Ambari Metrics Collector format" or "Record based format"
- "Record writer": used if and only if "Record based format" is selected

My primary objective was to have this reporting task usable in MiNiFi Java agents to send the metrics to a NiFi cluster that would publish the metrics to AMS. But we could definitely use this reporting task to get the metrics and store the data in another store such as Elasticsearch. In any case, I'd add an "Additional details" page to provide information about the schemas. ---
[jira] [Commented] (MINIFICPP-397) Implement RouteOnAttribute
[ https://issues.apache.org/jira/browse/MINIFICPP-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381058#comment-16381058 ] ASF GitHub Bot commented on MINIFICPP-397: -- Github user minifirocks commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/268#discussion_r171389409

--- Diff: libminifi/src/processors/RouteOnAttribute.cpp ---
@@ -0,0 +1,107 @@
+/**
+ * @file RouteOnAttribute.cpp
+ * RouteOnAttribute class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "processors/RouteOnAttribute.h"
+
+#include <memory>
+#include <set>
+#include <string>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+core::Relationship RouteOnAttribute::Unmatched(
+    "unmatched",
+    "Files which do not match any expression are routed here");
+core::Relationship RouteOnAttribute::Failure(
+    "failure",
+    "Failed files are transferred to failure");
+
+void RouteOnAttribute::initialize() {
+  std::set<core::Property> properties;
+  setSupportedProperties(properties);
+}
+
+void RouteOnAttribute::onDynamicPropertyModified(const core::Property &old_property,
+                                                 const core::Property &new_property) {
+  // Update the routing table when routes are added via dynamic properties.
+  route_properties_[new_property.getName()] = new_property;
+
+  std::set<core::Relationship> relationships;
+
+  for (const auto &route : route_properties_) {
+    core::Relationship route_rel{route.first, "Dynamic route"};
+    route_rels_[route.first] = route_rel;
+    relationships.insert(route_rel);
+    logger_->log_info("RouteOnAttribute registered route '%s' with expression '%s'",
+                      route.first,
+                      route.second.getValue());
+  }
+
+  relationships.insert(Unmatched);
+  relationships.insert(Failure);
+  setSupportedRelationships(relationships);
--- End diff --

OK.

> Implement RouteOnAttribute
> --
>
> Key: MINIFICPP-397
> URL: https://issues.apache.org/jira/browse/MINIFICPP-397
> Project: NiFi MiNiFi C++
> Issue Type: Improvement
> Reporter: Andrew Christianson
> Assignee: Andrew Christianson
> Priority: Major
>
> RouteOnAttribute is notably missing from MiNiFi - C++ and should be implemented.
[GitHub] nifi-minifi-cpp pull request #268: MINIFICPP-397 Added implementation of Rou...
Github user minifirocks commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/268#discussion_r171389409

--- Diff: libminifi/src/processors/RouteOnAttribute.cpp ---
@@ -0,0 +1,107 @@
+/**
+ * @file RouteOnAttribute.cpp
+ * RouteOnAttribute class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "processors/RouteOnAttribute.h"
+
+#include <memory>
+#include <set>
+#include <string>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+core::Relationship RouteOnAttribute::Unmatched(
+    "unmatched",
+    "Files which do not match any expression are routed here");
+core::Relationship RouteOnAttribute::Failure(
+    "failure",
+    "Failed files are transferred to failure");
+
+void RouteOnAttribute::initialize() {
+  std::set<core::Property> properties;
+  setSupportedProperties(properties);
+}
+
+void RouteOnAttribute::onDynamicPropertyModified(const core::Property &old_property,
+                                                 const core::Property &new_property) {
+  // Update the routing table when routes are added via dynamic properties.
+  route_properties_[new_property.getName()] = new_property;
+
+  std::set<core::Relationship> relationships;
+
+  for (const auto &route : route_properties_) {
+    core::Relationship route_rel{route.first, "Dynamic route"};
+    route_rels_[route.first] = route_rel;
+    relationships.insert(route_rel);
+    logger_->log_info("RouteOnAttribute registered route '%s' with expression '%s'",
+                      route.first,
+                      route.second.getValue());
+  }
+
+  relationships.insert(Unmatched);
+  relationships.insert(Failure);
+  setSupportedRelationships(relationships);
--- End diff --

OK. ---
[GitHub] nifi issue #2430: NIFI-4809 - Implement a SiteToSiteMetricsReportingTask
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2430 Ugh, that's true; I'm not a fan of dynamic keys. Since the schema would be generated by the reporting task, we could create a schema for the example above, but then each flow file would have its own schema and, even worse, each flow file would have only one metric, so at that point it's not really conducive to record processing. As an alternative we could convert the output (before or after JSON conversion) to an altered spec with a consistent schema definition. For the Ambari reporting task, since Ambari is expecting this format, that's fine; if we keep the same spec in this new reporting task for consistency, then I'd hope to see a template on the Wiki using the new reporting task with a JoltTransformJSON processor to do the aforementioned transformation, along with an AvroSchemaRegistry that contains the schema definition for the files coming out of the JoltTransformJSON processor. This would allow us to keep a consistent (standard) defined format (albeit non-schema-friendly) but offer a well-known solution to prepare the data for record-aware processors. Thoughts? ---
[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381050#comment-16381050 ] ASF GitHub Bot commented on NIFI-4809: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2430 Ugh, that's true; I'm not a fan of dynamic keys. Since the schema would be generated by the reporting task, we could create a schema for the example above, but then each flow file would have its own schema and, even worse, each flow file would have only one metric, so at that point it's not really conducive to record processing. As an alternative we could convert the output (before or after JSON conversion) to an altered spec with a consistent schema definition. For the Ambari reporting task, since Ambari is expecting this format, that's fine; if we keep the same spec in this new reporting task for consistency, then I'd hope to see a template on the Wiki using the new reporting task with a JoltTransformJSON processor to do the aforementioned transformation, along with an AvroSchemaRegistry that contains the schema definition for the files coming out of the JoltTransformJSON processor. This would allow us to keep a consistent (standard) defined format (albeit non-schema-friendly) but offer a well-known solution to prepare the data for record-aware processors. Thoughts?
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/2493 @david-streamlio I'm supportive of doing non-record ones for the cases you mention. Does Pulsar by any chance offer a true streaming API instead of byte[], so that handling the types of objects you describe is feasible? We are doing the memory warning annotations, so that will help too, but it would be awesome if they do. ---
[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask
[ https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381031#comment-16381031 ] ASF GitHub Bot commented on NIFI-4809: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2430

Thanks for your comments @mattyb149 - I just pushed a commit that should address everything. Regarding the record approach you suggested: even though I really like the idea, I'm not sure how to define a valid Avro schema for the specification used by the Ambari collector API (https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification):

{
  "metrics": [
    {
      "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",
      "appid": "amssmoketestfake",
      "hostname": "ambari20-5.c.pramod-thangali.internal",
      "timestamp": 1432075898000,
      "starttime": 1432075898000,
      "metrics": {
        "1432075898000": 0.963781711428,
        "1432075899000": 1432075898000
      }
    }
  ]
}

How would we manage the 'metrics' part where field names are timestamps?
[GitHub] nifi issue #2430: NIFI-4809 - Implement a SiteToSiteMetricsReportingTask
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2430

Thanks for your comments @mattyb149 - I just pushed a commit that should address everything. Regarding the record approach you suggested: even though I really like the idea, I'm not sure how to define a valid Avro schema for the specification used by the Ambari collector API (https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification):

{
  "metrics": [
    {
      "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",
      "appid": "amssmoketestfake",
      "hostname": "ambari20-5.c.pramod-thangali.internal",
      "timestamp": 1432075898000,
      "starttime": 1432075898000,
      "metrics": {
        "1432075898000": 0.963781711428,
        "1432075899000": 1432075898000
      }
    }
  ]
}

How would we manage the 'metrics' part where field names are timestamps? ---
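The transformation mattyb149 suggests handing to JoltTransformJSON — pivoting the timestamp-keyed "metrics" map into an array of {timestamp, value} records so a fixed schema can describe it — can be sketched in plain Java. The class and method names here are hypothetical, not the reporting task's actual code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MetricsPivot {

    // Pivot {"1432075898000": 0.96, ...} into [{timestamp, value}, ...] so a
    // fixed schema (an array of records) can describe the data instead of a
    // map whose field names are timestamps.
    public static List<Map<String, Object>> pivot(Map<String, Double> metrics) {
        List<Map<String, Object>> rows = new ArrayList<>();
        for (Map.Entry<String, Double> e : metrics.entrySet()) {
            Map<String, Object> row = new LinkedHashMap<>();
            row.put("timestamp", Long.parseLong(e.getKey()));
            row.put("value", e.getValue());
            rows.add(row);
        }
        return rows;
    }
}
```

With this shape, an Avro schema with fields `timestamp: long` and `value: double` covers every flow file, avoiding per-file schemas with dynamic field names.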
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2493 @david-streamlio HBase is an example of what @markap14 suggested: https://github.com/apache/nifi/tree/master/nifi-nar-bundles/nifi-standard-services and there is a new PR (#2498) currently open that adds a new service for HBase 2 without modifying anything in the related HBase processors located in this bundle: https://github.com/apache/nifi/tree/master/nifi-nar-bundles/nifi-hbase-bundle ---
[jira] [Commented] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean
[ https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380931#comment-16380931 ] ASF GitHub Bot commented on NIFI-4901: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2486 Fortunately for all of us here, I don't have commit rights :) > Json to Avro using Record framework does not support union types with boolean > - > > Key: NIFI-4901 > URL: https://issues.apache.org/jira/browse/NIFI-4901 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 > Environment: ALL >Reporter: Gardella Juan Pablo >Priority: Major > Attachments: optiona-boolean.zip > > > Given the following valid Avro Schema: > {code} > { >"type":"record", >"name":"foo", >"fields":[ > { > "name":"isSwap", > "type":[ > "boolean", > "null" > ] > } >] > } > {code} > And the following JSON: > {code} > { > "isSwap": { > "boolean": true > } > } > {code} > When it is trying to be converted to Avro using ConvertRecord fails with: > {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed > a JSON object from input but failed to convert into a Record object with the > given schema}} > Attached a repository to reproduce the issue and also included the fix: > * Run {{mvn clean test}} to reproduce the issue. > * Run {{mvn clean test -Ppatch}} to test the fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2486: NIFI-4901 Json to Avro using Record framework does not sup...
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2486 Fortunately for all of us here, I don't have commit rights :) ---
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user david-streamlio commented on the issue: https://github.com/apache/nifi/pull/2493 @markap14 I completely agree that is the best approach for handling text-based payloads into and out of Pulsar, and I am starting to work on the record-based processors now. However, I do see a need for keeping the non-record-based processors around to handle non-text payloads, such as image files, audio files, scientific data, etc. ---
[jira] [Commented] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean
[ https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380891#comment-16380891 ] ASF GitHub Bot commented on NIFI-4901: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2486
[GitHub] nifi pull request #2486: NIFI-4901 Json to Avro using Record framework does ...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2486 ---
[jira] [Commented] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean
[ https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380890#comment-16380890 ] ASF subversion and git services commented on NIFI-4901: --- Commit eb844d8c6f7fa9b9704d9480b6340bb824dcb667 in nifi's branch refs/heads/master from [~markap14] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=eb844d8 ] NIFI-4901: This ticket was marked invalid by the creator. This closes #2486.
[jira] [Resolved] (NIFI-4880) Add the ability to utilize aliases in Avro to Avro record conversion
[ https://issues.apache.org/jira/browse/NIFI-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne resolved NIFI-4880. -- Resolution: Fixed Fix Version/s: 1.6.0 > Add the ability to utilize aliases in Avro to Avro record conversion > > > Key: NIFI-4880 > URL: https://issues.apache.org/jira/browse/NIFI-4880 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.5.0 >Reporter: Derek Straka >Assignee: Derek Straka >Priority: Minor > Fix For: 1.6.0 > > > Currently, the Avro to Avro conversions will ignore fields that are not > mapped verbatim. In avro schemas, it is possible for fields to be aliased to > one another. It would be useful if a 1:1 mapping was not available, the > aliases for the field were searched to locate a prospective value and then > add the default. > > The functionality can be accomplished by adding some logic to > AvroTypeUtil::createAvroRecord. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4880) Add the ability to utilize aliases in Avro to Avro record conversion
[ https://issues.apache.org/jira/browse/NIFI-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380885#comment-16380885 ] ASF GitHub Bot commented on NIFI-4880: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2474 @derekstraka thanks that was a good catch! Code all looks good, unit tests pass, contrib-check is good. +1 I've merged this to master. Thanks again!
[jira] [Commented] (NIFI-4880) Add the ability to utilize aliases in Avro to Avro record conversion
[ https://issues.apache.org/jira/browse/NIFI-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380883#comment-16380883 ] ASF subversion and git services commented on NIFI-4880: --- Commit 44bc2d41d7d1c0140fc2daac5ce957641bf983b5 in nifi's branch refs/heads/master from [~derekstraka] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=44bc2d4 ] NIFI-4880: Add the ability to map record based on the aliases. This closes #2474 Signed-off-by: Derek Straka Signed-off-by: Mark Payne
[jira] [Commented] (NIFI-4880) Add the ability to utilize aliases in Avro to Avro record conversion
[ https://issues.apache.org/jira/browse/NIFI-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380884#comment-16380884 ] ASF GitHub Bot commented on NIFI-4880: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2474
[GitHub] nifi issue #2474: NIFI-4880: Add the ability to map record based on the alia...
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2474 @derekstraka thanks, that was a good catch! Code all looks good, unit tests pass, contrib-check is good. +1 I've merged this to master. Thanks again! ---
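The alias-based field matching merged in NIFI-4880 can be sketched in plain Python (a hypothetical `resolve_field` helper, not NiFi's actual `AvroTypeUtil::createAvroRecord` code): a reader-schema field matches a writer field name either directly or via one of its declared aliases, and only if neither matches does the reader fall back to the field's default.

```python
def resolve_field(writer_name, reader_fields):
    """Find the reader-schema field that matches a writer field name,
    first by exact name, then by any declared alias."""
    for field in reader_fields:
        if field["name"] == writer_name:
            return field
    for field in reader_fields:
        if writer_name in field.get("aliases", []):
            return field
    return None  # caller falls back to the field's default value

# Reader schema fields, expressed as plain dicts for illustration.
reader_fields = [
    {"name": "fullName", "aliases": ["name", "displayName"], "default": ""},
    {"name": "age", "default": 0},
]

match = resolve_field("name", reader_fields)  # matched via alias
```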
[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages
[ https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380873#comment-16380873 ] ASF GitHub Bot commented on NIFI-2630: -- Github user mosermw commented on a diff in the pull request: https://github.com/apache/nifi/pull/2458#discussion_r171354166 --- Diff: nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/ConsumeJMS.java --- @@ -136,9 +155,16 @@ relationships = Collections.unmodifiableSet(_relationships); } +@OnScheduled --- End diff -- Very good point, and I like the customValidate approach. I'll make the change. > Allow PublishJMS processor to create TextMessages > - > > Key: NIFI-2630 > URL: https://issues.apache.org/jira/browse/NIFI-2630 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.0 >Reporter: James Anderson >Assignee: Michael Moser >Priority: Minor > Labels: patch > Attachments: > 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch > > > Create a new configuration option for PublishJMS that allows the processor to > be configured to emit instances of TextMessages as well as BytesMessage. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean
[ https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380870#comment-16380870 ] ASF GitHub Bot commented on NIFI-4901: -- Github user gardellajuanpablo commented on the issue: https://github.com/apache/nifi/pull/2486 @MikeThomsen the ticket is invalid. Please do not merge to master. > Json to Avro using Record framework does not support union types with boolean > - > > Key: NIFI-4901 > URL: https://issues.apache.org/jira/browse/NIFI-4901 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 > Environment: ALL >Reporter: Gardella Juan Pablo >Priority: Major > Attachments: optiona-boolean.zip > > > Given the following valid Avro Schema: > {code} > { >"type":"record", >"name":"foo", >"fields":[ > { > "name":"isSwap", > "type":[ > "boolean", > "null" > ] > } >] > } > {code} > And the following JSON: > {code} > { > "isSwap": { > "boolean": true > } > } > {code} > When it is converted to Avro using ConvertRecord, it fails with: > {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed > a JSON object from input but failed to convert into a Record object with the > given schema}} > Attached is a repository that reproduces the issue and also includes the fix: > * Run {{mvn clean test}} to reproduce the issue. > * Run {{mvn clean test -Ppatch}} to test the fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
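For context, the JSON in the ticket follows Avro's JSON encoding of unions: a null value appears bare, while any other branch is wrapped in an object keyed by the branch's type name. A minimal Python sketch of that unwrapping (hypothetical `unwrap_union` helper, not NiFi code):

```python
import json

def unwrap_union(value, branch_types):
    """Unwrap Avro's JSON encoding of a union value: null appears bare,
    while any other branch is wrapped as {"<type-name>": <value>}."""
    if value is None:
        return None
    if isinstance(value, dict) and len(value) == 1:
        (branch, inner), = value.items()
        if branch in branch_types:
            return inner
    raise ValueError("value is not union-encoded for the given branches")

# The ticket's JSON document, for the ["boolean", "null"] union.
doc = json.loads('{"isSwap": {"boolean": true}}')
result = unwrap_union(doc["isSwap"], {"boolean", "null"})
```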
[jira] [Updated] (NIFI-4912) Update jackson version to latest stable
[ https://issues.apache.org/jira/browse/NIFI-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Derek Straka updated NIFI-4912: --- Status: Patch Available (was: Open) > Update jackson version to latest stable > --- > > Key: NIFI-4912 > URL: https://issues.apache.org/jira/browse/NIFI-4912 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.5.0 >Reporter: Derek Straka >Assignee: Derek Straka >Priority: Major > Fix For: 1.6.0 > > > The current jackson version is out of date and contains several CVEs as well > as outstanding bugs. Update to the latest stable version which at the time > of writing is 2.9.4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean
[ https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380862#comment-16380862 ] ASF GitHub Bot commented on NIFI-4901: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2486 +1 LGTM. Built it and ran. Looks like a good solution. However, the unit test probably should be merged with an existing test class if there is one where it is a reasonable fit. > Json to Avro using Record framework does not support union types with boolean > - > > Key: NIFI-4901 > URL: https://issues.apache.org/jira/browse/NIFI-4901 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 > Environment: ALL >Reporter: Gardella Juan Pablo >Priority: Major > Attachments: optiona-boolean.zip > > > Given the following valid Avro Schema: > {code} > { >"type":"record", >"name":"foo", >"fields":[ > { > "name":"isSwap", > "type":[ > "boolean", > "null" > ] > } >] > } > {code} > And the following JSON: > {code} > { > "isSwap": { > "boolean": true > } > } > {code} > When it is trying to be converted to Avro using ConvertRecord fails with: > {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed > a JSON object from input but failed to convert into a Record object with the > given schema}} > Attached a repository to reproduce the issue and also included the fix: > * Run {{mvn clean test}} to reproduce the issue. > * Run {{mvn clean test -Ppatch}} to test the fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages
[ https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380863#comment-16380863 ] ASF GitHub Bot commented on NIFI-2630: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2458#discussion_r171352969 --- Diff: nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/ConsumeJMS.java --- @@ -136,9 +155,16 @@ relationships = Collections.unmodifiableSet(_relationships); } +@OnScheduled --- End diff -- @mosermw thanks for the explanation! Makes sense. I would recommend reconsidering where it is implemented, though. Would look at either creating a new Validator in StandardValidators that allows EL without FlowFile attributes and evaluates that before validating the result, or otherwise just removing the validator all together and implementing it in customValidate... I would just prefer to see this done as part of the validation, rather than when the user attempts to start the processor. Make sense? > Allow PublishJMS processor to create TextMessages > - > > Key: NIFI-2630 > URL: https://issues.apache.org/jira/browse/NIFI-2630 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.0 >Reporter: James Anderson >Assignee: Michael Moser >Priority: Minor > Labels: patch > Attachments: > 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch > > > Create a new configuration option for PublishJMS that allows the processor to > be configured to emit instances of TextMessages as well as BytesMessage. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
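The validate-time check being asked for here can be sketched in Python (a hypothetical `validate_charset` helper standing in for a NiFi property validator): the character-set name is checked as soon as it is configured, rather than failing only when the user attempts to start the processor.

```python
import codecs

def validate_charset(name):
    """Check a character-set name eagerly, the way a property validator
    would, instead of failing later when the processor is started."""
    try:
        codecs.lookup(name)
        return True, f"'{name}' is a valid character set"
    except LookupError:
        return False, f"'{name}' is not a supported character set"
```

In NiFi itself the same idea lives in a `Validator` attached to the property descriptor (or in `customValidate`), so an invalid value surfaces in the UI before the processor runs.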
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2493 @david-streamlio the idea behind the record-oriented processors is that they make building your flow in NiFi much easier. Without that, you end up having to split your data up into tons of FlowFiles and then push each one individually to Pulsar. So you would have to use a SplitText, SplitJson, SplitAvro, etc. type of processor if the data is already 'batched together.' But if we had a PublishPulsarRecord, you can skip having to split the data up. It turns out that splitting the data up becomes quite expensive because instead of a single FlowFile containing 10,000 records you now have 10,000 FlowFiles, each with their own attributes, their own Provenance events, etc. So the record-oriented processors allow the flow to be much more efficient, and they also allow easy conversion, validation, etc. so the flows are also easier to build and maintain. So even without a schema registry integrated into Pulsar, the record-oriented approach is very helpful for the users building the flow. ---
[jira] [Commented] (NIFI-4916) Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes
[ https://issues.apache.org/jira/browse/NIFI-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380818#comment-16380818 ] ASF GitHub Bot commented on NIFI-4916: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2500 +1 LGTM. Built it and unit tests ran. Japanese build succeeded on Travis CI. I think it's ready to merge. > Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes > - > > Key: NIFI-4916 > URL: https://issues.apache.org/jira/browse/NIFI-4916 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 > Environment: NiFi 1.5.0 >Reporter: Fabio Coutinho >Assignee: Pierre Villard >Priority: Major > Attachments: ProvenanceCsvFlowFile.png, ProvenanceXlsFlowFile.png > > > When converting a flowfile containing an XLS file to CSV, the newly generated > flowfiles do not inherit the attributes from the original one. > Without the original flowfile's attributes, important information retrieved > before conversion (for example, file metadata) cannot be used after the file > is converted. I have attached 2 image files showing the attributes before and > after conversion. Please note that the input file has a lot of metadata > retrieved from Amazon S3 that does not exist on the new flowfile. > I believe that like most other NiFi processors, the original attributes > should be copied to new flowfiles. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
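The fix under review amounts to copying the parent flowfile's attributes onto each child it produces. A minimal Python sketch of that merge (hypothetical `derive_flowfile` helper; the attribute names below are illustrative):

```python
def derive_flowfile(parent_attrs, new_attrs):
    """Build a child flowfile's attribute map: inherit everything from the
    parent, letting freshly written attributes win on key collisions."""
    merged = dict(parent_attrs)
    merged.update(new_attrs)
    return merged

# Example: S3 metadata gathered before conversion survives on the CSV child.
parent = {"s3.bucket": "invoices", "filename": "report.xls"}
child = derive_flowfile(parent, {"filename": "report.csv", "mime.type": "text/csv"})
```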
[jira] [Assigned] (NIFI-4911) NiFi CompressContent Snappy incompatible behavior with Spark
[ https://issues.apache.org/jira/browse/NIFI-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Owens reassigned NIFI-4911: Assignee: (was: Mark Owens) > NiFi CompressContent Snappy incompatible behavior with Spark > > > Key: NIFI-4911 > URL: https://issues.apache.org/jira/browse/NIFI-4911 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.2.0 > Environment: HDF 3.0.2 running on Centos >Reporter: Nabeel Sarwar >Priority: Major > > The CompressContent processor uses the SnappyOutputStream class from > snappy-java project. As listed on > [https://github.com/xerial/snappy-java|https://github.com/xerial/snappy-java,] > this output will be incompatible with > org.apache.hadoop.io.compress.SnappyCodec used for default in spark. When you > try to read snappy files produced by this processor from Spark, you will get > an empty dataframe. > One can deal with the data in Spark by using the SnappyInputStream on the raw > files and not dealing with the SnappyCodec in spark, but it is not obvious at > first glance why the default doesn't work. > Is there a way to add HadoopCompatibleSnappy as an option like Snappy Framed > is offered? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-397) Implement RouteOnAttribute
[ https://issues.apache.org/jira/browse/MINIFICPP-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380793#comment-16380793 ] ASF GitHub Bot commented on MINIFICPP-397: -- Github user achristianson commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/268#discussion_r171338847 --- Diff: libminifi/src/processors/RouteOnAttribute.cpp --- @@ -0,0 +1,107 @@
+/**
+ * @file RouteOnAttribute.cpp
+ * RouteOnAttribute class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "processors/RouteOnAttribute.h"
+
+#include <memory>
+#include <set>
+#include <string>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+core::Relationship RouteOnAttribute::Unmatched(
+    "unmatched",
+    "Files which do not match any expression are routed here");
+core::Relationship RouteOnAttribute::Failure(
+    "failure",
+    "Failed files are transferred to failure");
+
+void RouteOnAttribute::initialize() {
+  std::set<core::Property> properties;
+  setSupportedProperties(properties);
+}
+
+void RouteOnAttribute::onDynamicPropertyModified(const core::Property &old_property,
+                                                 const core::Property &new_property) {
+  // Update the routing table when routes are added via dynamic properties.
+  route_properties_[new_property.getName()] = new_property;
+
+  std::set<core::Relationship> relationships;
+
+  for (const auto &route : route_properties_) {
+    core::Relationship route_rel{route.first, "Dynamic route"};
+    route_rels_[route.first] = route_rel;
+    relationships.insert(route_rel);
+    logger_->log_info("RouteOnAttribute registered route '%s' with expression '%s'",
+                      route.first,
+                      route.second.getValue());
+  }
+
+  relationships.insert(Unmatched);
+  relationships.insert(Failure);
+  setSupportedRelationships(relationships);
--- End diff -- Since we don't support changing of relationships if isRunning() is true, a call to update properties/relationships will not go through and the failure will be logged. I think that's the behavior we want. > Implement RouteOnAttribute > -- > > Key: MINIFICPP-397 > URL: https://issues.apache.org/jira/browse/MINIFICPP-397 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > RouteOnAttribute is notably missing from MiNiFi - C++ and should be > implemented. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4916) Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes
[ https://issues.apache.org/jira/browse/NIFI-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-4916: - Assignee: Pierre Villard Status: Patch Available (was: Open) > Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes > - > > Key: NIFI-4916 > URL: https://issues.apache.org/jira/browse/NIFI-4916 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 > Environment: NiFi 1.5.0 >Reporter: Fabio Coutinho >Assignee: Pierre Villard >Priority: Major > Attachments: ProvenanceCsvFlowFile.png, ProvenanceXlsFlowFile.png > > > When converting a flowfile containing an XLS file to CSV, the newly generated > flowfiles do not inherit the attributes from the original one. > Without the original flowfile's attributes, important information retrieved > before conversion (for example, file metadata) cannot be used after the file > is converted. I have attached 2 image files showing the attributes before and > after conversion. Please note that the input file has a lot of metadata > retrieved from Amazon S3 that does not exist on the new flowfile. > I believe that like most other NiFi processors, the original attributes > should be copied to new flowfiles. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4916) Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes
[ https://issues.apache.org/jira/browse/NIFI-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380699#comment-16380699 ] ASF GitHub Bot commented on NIFI-4916: -- GitHub user pvillard31 opened a pull request: https://github.com/apache/nifi/pull/2500 NIFI-4916 - ConvertExcelToCSVProcessor inherit parent attributes Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running: $ git pull https://github.com/pvillard31/nifi NIFI-4916 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2500.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2500 commit b58180cf96ad93a0e5b6584f9fd1bf2ce40fbd07 Author: Pierre Villard Date: 2018-02-28T17:22:14Z NIFI-4916 - ConvertExcelToCSVProcessor inherit parent attributes > Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes > - > > Key: NIFI-4916 > URL: https://issues.apache.org/jira/browse/NIFI-4916 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 > Environment: NiFi 1.5.0 >Reporter: Fabio Coutinho >Priority: Major > Attachments: ProvenanceCsvFlowFile.png, ProvenanceXlsFlowFile.png > > > When converting a flowfile containing an XLS file to CSV, the newly generated > flowfiles do not inherit the attributes from the original one. > Without the original flowfile's attributes, important information retrieved > before conversion (for example, file metadata) cannot be used after the file > is converted. I have attached 2 image files showing the attributes before and > after conversion. Please note that the input file has a lot of metadata > retrieved from Amazon S3 that does not exist on the new flowfile. > I believe that like most other NiFi processors, the original attributes > should be copied to new flowfiles. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380659#comment-16380659 ] ASF GitHub Bot commented on NIFI-4914: -- Github user david-streamlio commented on the issue: https://github.com/apache/nifi/pull/2493 I created a separate JIRA for record-based processors, https://issues.apache.org/jira/browse/NIFI-4914, but would like to get these processors into the release as well, primarily because Pulsar doesn't currently have schemas or a schema registry, so writing records to Pulsar doesn't help the downstream consumers much unless they use the ConsumePulsarRecord processor. > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Fix For: 1.6.0 > > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/2493 @david-streamlio @markap14 i'd recommend we dont even bother with non record based producers/consumers with Pulsar. ---
[jira] [Closed] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean
[ https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gardella Juan Pablo closed NIFI-4901. - > Json to Avro using Record framework does not support union types with boolean > - > > Key: NIFI-4901 > URL: https://issues.apache.org/jira/browse/NIFI-4901 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 > Environment: ALL >Reporter: Gardella Juan Pablo >Priority: Major > Attachments: optiona-boolean.zip > > > Given the following valid Avro Schema: > {code} > { >"type":"record", >"name":"foo", >"fields":[ > { > "name":"isSwap", > "type":[ > "boolean", > "null" > ] > } >] > } > {code} > And the following JSON: > {code} > { > "isSwap": { > "boolean": true > } > } > {code} > When it is trying to be converted to Avro using ConvertRecord fails with: > {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed > a JSON object from input but failed to convert into a Record object with the > given schema}} > Attached a repository to reproduce the issue and also included the fix: > * Run {{mvn clean test}} to reproduce the issue. > * Run {{mvn clean test -Ppatch}} to test the fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-4919) Document the nifi-toolkit-cli
Pierre Villard created NIFI-4919: Summary: Document the nifi-toolkit-cli Key: NIFI-4919 URL: https://issues.apache.org/jira/browse/NIFI-4919 Project: Apache NiFi Issue Type: Improvement Components: Documentation Website, Tools and Build Reporter: Pierre Villard During the PR review ([https://github.com/apache/nifi/pull/2477]), a few comments were made about providing the best possible documentation for the features of the new NiFi toolkit CLI. Using annotations to automatically generate the documentation may be the best approach to producing exhaustive documentation of the available commands. In addition, a dedicated section for the toolkit binaries would probably be a good idea in the NiFi documentation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
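The annotation-driven approach suggested in the ticket can be sketched in Python, with a decorator standing in for Java annotations (all names below are hypothetical, not the actual nifi-toolkit-cli API): each command registers its own description, and the reference docs are generated from that registry so they can never drift out of sync with the commands.

```python
COMMANDS = {}

def command(description):
    """Register a CLI command function together with its description,
    so reference docs can be generated from the annotations."""
    def wrap(fn):
        COMMANDS[fn.__name__] = description
        return fn
    return wrap

@command("List the buckets in the registry")
def list_buckets():
    pass

@command("Import a flow from a file or URL")
def import_flow(source):
    pass

def generate_docs():
    # Emit one "name: description" line per registered command.
    return "\n".join(f"{name}: {desc}" for name, desc in sorted(COMMANDS.items()))
```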
[jira] [Commented] (NIFI-4915) Add support for HBase 2.0.0
[ https://issues.apache.org/jira/browse/NIFI-4915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380632#comment-16380632 ] ASF GitHub Bot commented on NIFI-4915: -- Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2498 @MikeThomsen thanks for the heads up, i don't anticipate this one being able to be merged any time soon since it will probably still be a while before a GA 2.0.0 hbase-client is available, so we should have time to reconcile anything > Add support for HBase 2.0.0 > --- > > Key: NIFI-4915 > URL: https://issues.apache.org/jira/browse/NIFI-4915 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > > The HBase community is gearing up for their 2.0.0 release and currently has a > 2.0.0-beta-1 release out. We should provide a new HBaseClientService that > uses the 2.0.0 client. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2498: [WIP] NIFI-4915 - Creating new nifi-hbase_2-client-service...
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2498 @MikeThomsen thanks for the heads up. I don't anticipate this one can be merged any time soon, since it will probably still be a while before a GA 2.0.0 hbase-client is available, so we should have time to reconcile anything. ---
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380595#comment-16380595 ] ASF subversion and git services commented on NIFI-4839: --- Commit fe71c18ec58c1b4b2971d32b8184dd0d2dbba402 in nifi's branch refs/heads/master from [~aperepel] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=fe71c18 ] NIFI-4839 - Support both public URLs and local files as inputs for import actions. - The handling of an empty canvas got lost in the merge, causing errors with a new NiFi instance. - Broadened support for input; it now supports both local files _and_ any public URL with a scheme recognized by the Java runtime. > Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows > > > Key: NIFI-4839 > URL: https://issues.apache.org/jira/browse/NIFI-4839 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Major > > Now that we have NiFi Registry and the ability to import/upgrade flows in > NiFi, we should offer a command-line tool to interact with these REST > end-points. This could be part of NiFi Toolkit and would help people potentially > automate some of these operations.
[jira] [Commented] (NIFI-4915) Add support for HBase 2.0.0
[ https://issues.apache.org/jira/browse/NIFI-4915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380621#comment-16380621 ] ASF GitHub Bot commented on NIFI-4915: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2498 @bbende I have a huge commit in the works for the existing 1.X functionality that adds visibility label support. You might want to at least eyeball that commit. It's in branch 4637 on my repo.
[GitHub] nifi issue #2498: [WIP] NIFI-4915 - Creating new nifi-hbase_2-client-service...
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2498 @bbende I have a huge commit in the works for the existing 1.X functionality that adds visibility label support. You might want to at least eyeball that commit. It's in branch 4637 on my repo. ---
[jira] [Commented] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean
[ https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380617#comment-16380617 ] Gardella Juan Pablo commented on NIFI-4901: --- Yes, please close it. Thanks!
[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication
[ https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380615#comment-16380615 ] ASF GitHub Bot commented on NIFI-4838: -- Github user MikeThomsen commented on a diff in the pull request: https://github.com/apache/nifi/pull/2448#discussion_r171302727 --- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java --- @@ -129,26 +144,44 @@ .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR) .build(); static final PropertyDescriptor RESULTS_PER_FLOWFILE = new PropertyDescriptor.Builder() -.name("results-per-flowfile") -.displayName("Results Per FlowFile") -.description("How many results to put into a flowfile at once. The whole body will be treated as a JSON array of results.") -.required(false) -.expressionLanguageSupported(true) -.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR) -.build(); +.name("results-per-flowfile") +.displayName("Results Per FlowFile") +.description("How many results to put into a flowfile at once. The whole body will be treated as a JSON array of results.") +.required(false) +.expressionLanguageSupported(true) +.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR) +.build(); +static final PropertyDescriptor ESTIMATE_PROGRESS = new PropertyDescriptor.Builder() +.name("estimate-progress") +.displayName("Estimate Progress") +.description("If enabled, a count query will be run first, using the configured query, and attributes will be added to each flowfile showing how far they are into the result set.") +.required(true) +.addValidator(StandardValidators.BOOLEAN_VALIDATOR) +.allowableValues(GM_TRUE, GM_FALSE) +.defaultValue(GM_FALSE.getValue()) +.build(); +static final PropertyDescriptor PROGRESSIVE_COMMITS = new PropertyDescriptor.Builder() +.name("progressive-commits") +.displayName("Commit After Each Batch") --- End diff -- It works in coordination with the results per flowfile property. The idea is to emulate the ExecuteSQL where after a batch of X number has been built up in the processor and it sends the data to a flowfile, it commits. I'm tempted to change the batch size property's display name to be something like Query Fetch Size. I think Results Per Flowfile is probably even clearer than Batch Size for this. Thoughts? > Make GetMongo support multiple commits and give some progress indication > > > Key: NIFI-4838 > URL: https://issues.apache.org/jira/browse/NIFI-4838 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > > It shouldn't wait until the end to do a commit() call because the effect is > that GetMongo looks like it has hung to a user who is pulling a very large > data set. > It should also have an option for running a count query to get the current > approximate count of documents that would match the query and append an > attribute that indicates where a flowfile stands in the total result count. > Ex: > query.progress.point.start = 2500 > query.progress.point.end = 5000 > query.count.estimate = 17,568,231
[GitHub] nifi pull request #2448: NIFI-4838 Added configurable progressive commits to...
Github user MikeThomsen commented on a diff in the pull request: https://github.com/apache/nifi/pull/2448#discussion_r171302727 --- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java --- @@ -129,26 +144,44 @@ .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR) .build(); static final PropertyDescriptor RESULTS_PER_FLOWFILE = new PropertyDescriptor.Builder() -.name("results-per-flowfile") -.displayName("Results Per FlowFile") -.description("How many results to put into a flowfile at once. The whole body will be treated as a JSON array of results.") -.required(false) -.expressionLanguageSupported(true) -.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR) -.build(); +.name("results-per-flowfile") +.displayName("Results Per FlowFile") +.description("How many results to put into a flowfile at once. The whole body will be treated as a JSON array of results.") +.required(false) +.expressionLanguageSupported(true) +.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR) +.build(); +static final PropertyDescriptor ESTIMATE_PROGRESS = new PropertyDescriptor.Builder() +.name("estimate-progress") +.displayName("Estimate Progress") +.description("If enabled, a count query will be run first, using the configured query, and attributes will be added to each flowfile showing how far they are into the result set.") +.required(true) +.addValidator(StandardValidators.BOOLEAN_VALIDATOR) +.allowableValues(GM_TRUE, GM_FALSE) +.defaultValue(GM_FALSE.getValue()) +.build(); +static final PropertyDescriptor PROGRESSIVE_COMMITS = new PropertyDescriptor.Builder() +.name("progressive-commits") +.displayName("Commit After Each Batch") --- End diff -- It works in coordination with the results per flowfile property. The idea is to emulate the ExecuteSQL where after a batch of X number has been built up in the processor and it sends the data to a flowfile, it commits. 
I'm tempted to change the batch size property's display name to be something like Query Fetch Size. I think Results Per Flowfile is probably even clearer than Batch Size for this. Thoughts? ---
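The progressive-commit behavior discussed above — emit a flowfile and commit after every batch of "Results Per FlowFile" documents instead of once at the end — can be sketched in plain Java. This is an illustrative, self-contained sketch of the batching logic only, not the actual GetMongo implementation; the class and method names are invented for the example, and the real processor would call `session.commit()` where `emitAndCommit` is invoked here.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class ProgressiveBatcher {
    // Drain a result cursor in batches of 'resultsPerFlowFile', invoking
    // 'emitAndCommit' after each batch instead of once at the very end,
    // so a long-running query shows visible progress. Returns the number
    // of batches (flowfiles) emitted.
    static int drainInBatches(Iterable<String> cursor, int resultsPerFlowFile,
                              Consumer<List<String>> emitAndCommit) {
        List<String> batch = new ArrayList<>();
        int batches = 0;
        for (String doc : cursor) {
            batch.add(doc);
            if (batch.size() >= resultsPerFlowFile) {
                emitAndCommit.accept(new ArrayList<>(batch)); // one flowfile per batch
                batch.clear();
                batches++;
            }
        }
        if (!batch.isEmpty()) { // flush the final partial batch
            emitAndCommit.accept(new ArrayList<>(batch));
            batches++;
        }
        return batches;
    }
}
```

With five results and a batch size of two, this emits three flowfiles — two full batches and one partial — each followed by a commit, which is the ExecuteSQL-style behavior the comment describes.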
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380604#comment-16380604 ] ASF subversion and git services commented on NIFI-4839: --- Commit 2fd24b78e6883b9c52ca53a626db17957a066276 in nifi's branch refs/heads/master from [~aperepel] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=2fd24b7 ] NIFI-4839 - The "Disabled" column had incorrect size and skewed the header
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380600#comment-16380600 ] ASF subversion and git services commented on NIFI-4839: --- Commit b68eebd4293bc921148d933ad94d890c388ce4d0 in nifi's branch refs/heads/master from [~bbende] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b68eebd ] NIFI-4839 - Added abbreviation in simple output for name, description, and comments - Refactored so that commands produce a result which can then be written or used - Added support for back-referencing results, initially prototyped by Andrew Grande - Fixed dynamic table layout when writing simple results - Added a new command group called 'demo' with a new 'quick-import' command - Fixes/improvements after previous refactoring - Created a reusable TableWriter and updating a few result classes to use it
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380597#comment-16380597 ] ASF subversion and git services commented on NIFI-4839: --- Commit 69367ff0bf9321c498973e64103e2a1477037383 in nifi's branch refs/heads/master from [~bbende] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=69367ff ] NIFI-4839 - Updating README and cleaning up descriptions and comments - Making registryClientId optional and auto selecting when only one is available - Added delete-bucket command - Added delete-flow command for registry
[jira] [Updated] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-4839: - Resolution: Fixed Fix Version/s: 1.6.0 Status: Resolved (was: Patch Available)
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380608#comment-16380608 ] ASF GitHub Bot commented on NIFI-4839: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2477
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380601#comment-16380601 ] ASF subversion and git services commented on NIFI-4839: --- Commit d1027879ebd606f8781b5024b1ca2ce3f4f54068 in nifi's branch refs/heads/master from [~aperepel] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=d102787 ] NIFI-4839 - Fixed handling of a connection object position - it doesn't have one and just returns null (calculated by the UI dynamically)
[GitHub] nifi pull request #2477: NIFI-4839 Adding CLI to nifi-toolkit
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2477 ---
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380603#comment-16380603 ] ASF subversion and git services commented on NIFI-4839: --- Commit 1911635a3a39ca0ee3e4c7163a0aa6d14c0b401f in nifi's branch refs/heads/master from [~bbende] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=1911635 ] NIFI-4839 - Switching standalone mode to default to simple output - Added pg-status command and improved output of pg-list - Setting up back-refs for pg-list and using table layout for pg-get-vars and pg-get-version - Only print usage on errors related to missing/incorrect options
[jira] [Updated] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-4839: - Component/s: Tools and Build
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380605#comment-16380605 ] ASF subversion and git services commented on NIFI-4839: --- Commit 5041bea773c47b0b16b0a0e713d13c16f0cd66b6 in nifi's branch refs/heads/master from [~bbende] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=5041bea ] NIFI-4839 Improving back-ref support so that ReferenceResolver is passed the option being resolved - Adding ResolvedReference to encapsulate the results of resolving a back-reference. - Update README.md - Added OkResult for delete commands - Added sync-flow-versions and transfer-flow-version to registry commands - Returning appropriate status code when exiting standalone mode - Adding security section to README Signed-off-by: Pierre Villard
This closes #2477.
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380598#comment-16380598 ] ASF subversion and git services commented on NIFI-4839: --- Commit cc3c1b17142ef7767d429481008b2643e890f875 in nifi's branch refs/heads/master from [~aperepel] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=cc3c1b1 ] NIFI-4839 - Implemented nice dynamic table output for all list-XXX commands (in simple mode) - Better output formatting for 'registry list-buckets' - Implemented dynamic table formatting for 'registry list-XXX' commands - Implemented dynamic table formatting for 'nifi list-registry-clients' command - Better handling of non-null, but empty descriptions and commit messages
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user david-streamlio commented on the issue: https://github.com/apache/nifi/pull/2493 Thanks @markap14. This has been a great learning experience for me, and I am happy to contribute back to such a great project. I am still new to developing processors, so I really appreciate the feedback on my design; these are great points I hadn't considered. - I decided to go with a ControllerService because I envision the user will only have one or two Pulsar clusters they are interacting with, so it made more sense to have a controller service that the user can configure once (Broker URL & SSL ControllerService) and pull Publishers and Consumers from. I believe the common usage pattern will be a small number of Producers that leverage the expression language to route messages to multiple topics. I would expect Consumers to be configured with a single topic/subscription pair to feed data into the flow. - To be honest, I have not given much thought to how we will handle Pulsar client evolution from these processors, as I don't expect the API to change drastically. Having said that, I think it is safe to at least associate these with the major version number of Pulsar, just in case there is a major functionality shift between 1.x and 2.x. I am OK with naming these ConsumerPulsar_1_0. - Can you point me to an example of the "Client Service" Controller pattern you mention? I would like to examine that. ---
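The "configure once, hand out producers" design described above can be sketched in plain Java. This is a hypothetical illustration of the pattern, not the actual NiFi ControllerService or Pulsar client API: the interface name `PulsarClientService`, the method `getProducer`, and the `Function`-based factory are all invented for the example. The idea is that the service owns the broker settings and caches one producer per topic so that many processors share a small pool.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of a "client service" style controller service:
// the broker URL is configured once; producers are created lazily and
// cached per topic so multiple processors reuse them.
interface PulsarClientService {
    String getBrokerUrl();
    <T> T getProducer(String topic, Function<String, T> factory);
}

class SimplePulsarClientService implements PulsarClientService {
    private final String brokerUrl;
    private final Map<String, Object> producers = new ConcurrentHashMap<>();

    SimplePulsarClientService(String brokerUrl) {
        this.brokerUrl = brokerUrl;
    }

    @Override
    public String getBrokerUrl() {
        return brokerUrl;
    }

    @Override
    @SuppressWarnings("unchecked")
    public <T> T getProducer(String topic, Function<String, T> factory) {
        // computeIfAbsent guarantees one producer instance per topic,
        // even under concurrent access from several processors.
        return (T) producers.computeIfAbsent(topic, factory);
    }
}
```

In the real bundle the factory argument would build an actual Pulsar producer from the shared client; here a plain `String` stands in so the sketch stays self-contained.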
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380594#comment-16380594 ] ASF subversion and git services commented on NIFI-4839: --- Commit c1c808002c32008f31ec80664046c1857782e5a9 in nifi's branch refs/heads/master from [~bbende] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=c1c8080 ] NIFI-4839 - Modified how the process group box is calculated - Adding command to get the id of a registry client by name - Refactoring how results are written to support option of simple or json output - Added pg-set-var command - Added pg-list command - Added getDescription to every command and prints when asking for help on a command - Adding verbose out to help command to print description for every command
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380593#comment-16380593 ] ASF subversion and git services commented on NIFI-4839: --- Commit e3cc7bee057e7cbc3c7c852f170d5fa34749cdb5 in nifi's branch refs/heads/master from [~aperepel] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=e3cc7be ] NIFI-4839 - Implemented auto-layout when importing the PG. Will find an available spot on a canvas which doesn't overlap with other components and is as close to the canvas center as possible.
[jira] [Commented] (MINIFICPP-415) Implement matches() Expression Language function
[ https://issues.apache.org/jira/browse/MINIFICPP-415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380582#comment-16380582 ] ASF GitHub Bot commented on MINIFICPP-415: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/270 > Implement matches() Expression Language function > > > Key: MINIFICPP-415 > URL: https://issues.apache.org/jira/browse/MINIFICPP-415 > Project: NiFi MiNiFi C++ > Issue Type: Sub-task >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > Implement matches() Expression Language function
[GitHub] nifi pull request #2499: Nifi-4918 JMS Connection Factory setting the dynami...
GitHub user jugi92 opened a pull request: https://github.com/apache/nifi/pull/2499 Nifi-4918 JMS Connection Factory setting the dynamic Properties wrong Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running: $ git pull https://github.com/jugi92/nifi NIFI-4918 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2499.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2499 commit d3e62ed82acc856681d94da559060b5ce823e961 Author: Julian Gimbel Date: 2018-02-28T12:39:32Z looping over several methods to try and fit the dynamic attribute. If failed, use first method and throw error if not working. commit 255c8478fb2b4957bcc49c41ca069149387f3b32 Author: Julian Gimbel Date: 2018-02-28T12:39:32Z NIFI-4918 JMS Connection Factory setting the dynamic Properties wrong. Now looping over several methods to try and fit the dynamic attribute. If failed, use first method and throw error if not working. commit 7446c9e5f447a4669a89bb87ec509fd70606b91f Author: Julian Gimbel Date: 2018-02-28T16:12:27Z Merge branch 'NIFI-4918' of https://github.com/jugi92/nifi into NIFI-4918 ---
[jira] [Commented] (NIFI-4839) Create a CLI in NiFi Toolkit to interact with NIFi Registry/deploy flows
[ https://issues.apache.org/jira/browse/NIFI-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380558#comment-16380558 ] ASF GitHub Bot commented on NIFI-4839: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2477 +1. Just a remark: the version of commons-io does not need to be specified in the toolkit-cli pom file as it's already defined in the root pom. Will take care of it while merging. Thanks for the amazing job.
[jira] [Commented] (NIFI-4901) Json to Avro using Record framework does not support union types with boolean
[ https://issues.apache.org/jira/browse/NIFI-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380557#comment-16380557 ] Mark Payne commented on NIFI-4901: -- Great, [~gardellajuanpablo], I'm glad that this appears to be a non-issue. Do you mind closing out the associated PR? > Json to Avro using Record framework does not support union types with boolean > - > > Key: NIFI-4901 > URL: https://issues.apache.org/jira/browse/NIFI-4901 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 > Environment: ALL >Reporter: Gardella Juan Pablo >Priority: Major > Attachments: optiona-boolean.zip > > > Given the following valid Avro Schema: > {code} > { >"type":"record", >"name":"foo", >"fields":[ > { > "name":"isSwap", > "type":[ > "boolean", > "null" > ] > } >] > } > {code} > And the following JSON: > {code} > { > "isSwap": { > "boolean": true > } > } > {code} > When it is converted to Avro using ConvertRecord, the conversion fails with: > {{org.apache.nifi.serialization.MalformedRecordException: Successfully parsed > a JSON object from input but failed to convert into a Record object with the > given schema}} > Attached is a repository that reproduces the issue and also includes the fix: > * Run {{mvn clean test}} to reproduce the issue. > * Run {{mvn clean test -Ppatch}} to test the fix. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2477: NIFI-4839 Adding CLI to nifi-toolkit
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2477 +1. Just a remark: the version of commons-io does not need to be specified in the toolkit-cli pom file as it's already defined in the root pom. Will take care of it while merging. Thanks for the amazing job. ---
[GitHub] nifi pull request #2458: NIFI-2630 Allow PublishJMS to send TextMessages
Github user mosermw commented on a diff in the pull request: https://github.com/apache/nifi/pull/2458#discussion_r171296519 --- Diff: nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/ConsumeJMS.java --- @@ -136,9 +155,16 @@ relationships = Collections.unmodifiableSet(_relationships); } +@OnScheduled --- End diff -- Thanks for looking at this @markap14. The CHARACTER_SET_VALIDATOR essentially says a property value is valid if an EL expression returns a String. If someone sets it to ${system.charset} and that environment variable is set to "FOO", for example, then when ConsumeJMS receives a TextMessage it will throw an UnsupportedCharsetException at runtime. I thought I would include this method to give earlier warning. PublishJMS doesn't need this check, because the charset could be set in a flowfile attribute, and UnsupportedCharsetException would just cause the flowfile to go to 'failure'. If you still think the method is unnecessary, though, let me know and I can remove it. ---
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2493 Hey @david-streamlio this is very cool! I've been thinking about writing processors for interacting with Pulsar myself but haven't had a chance yet. Just a few things that we should think through a bit: - Re: Connection Pool in controller service vs. doing it in the processor: what makes sense here I think depends on how you expect to use it. If you expect to be creating several Pulsar processors with the same connection info, then a Controller Service makes sense. If you think the more common case will be a single instance of the Processor then configuring it in the Processor is probably easier for the user. I think both have their merits though, so I'm fine with either approach personally. - One concern that I have is that with the Kafka processors, we end up having to create a new copy of the processors with pretty much each release of Kafka, so that we can take advantage of the new features. Have you considered how you see this evolving as more versions of Pulsar are released? There are two approaches that we often see with NiFi. One is to create a new processor for each new version as we did with Kafka. The other is to have a "Client Service" Controller service. It would then have methods like publish(FlowFile), consume() or something like that. Then there is only a single ConsumePulsar processor and a single PublishPulsar processor. Each is then just configured with the controller service that handles interacting with Pulsar directly. Either approach is okay, I think. But we should probably think about naming at least - does it make sense to name these ConsumePulsar_1_20 or ConsumePulsar_1_0 or something of that nature? I think it's best to figure this part out before the initial release because it can then become confusing if we have processors like ConsumePulsar and ConsumePulsar_1_35 for instance. Thoughts? ---
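The "Client Service" alternative markap14 describes can be sketched roughly as below. This is an illustration only: the interface name and method signatures are assumptions for discussion, not the API from this PR, and the in-memory class merely stands in for a real broker client.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical shape of the "Client Service" approach: one version-agnostic
// service interface that a single PublishPulsar/ConsumePulsar processor pair
// would call, with per-version implementations packaged separately.
interface PulsarClientService {
    void publish(String topic, byte[] payload);
    byte[] consume(String topic); // returns null when nothing is pending
}

// In-memory stand-in so the pattern can be exercised without a broker.
class InMemoryPulsarClientService implements PulsarClientService {
    private final Queue<byte[]> queue = new ArrayDeque<>();
    @Override public void publish(String topic, byte[] payload) { queue.add(payload); }
    @Override public byte[] consume(String topic) { return queue.poll(); }
}

public class ClientServiceDemo {
    public static void main(String[] args) {
        PulsarClientService svc = new InMemoryPulsarClientService();
        svc.publish("demo", "hello".getBytes());
        System.out.println(new String(svc.consume("demo"))); // prints "hello"
    }
}
```

With this split, upgrading Pulsar versions means swapping the service implementation rather than introducing versioned processor copies like ConsumePulsar_1_X.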
[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages
[ https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380540#comment-16380540 ] ASF GitHub Bot commented on NIFI-2630: -- Github user mosermw commented on a diff in the pull request: https://github.com/apache/nifi/pull/2458#discussion_r171296519 --- Diff: nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/ConsumeJMS.java --- @@ -136,9 +155,16 @@ relationships = Collections.unmodifiableSet(_relationships); } +@OnScheduled --- End diff -- Thanks for looking at this @markap14. The CHARACTER_SET_VALIDATOR essentially says a property value is valid if an EL expression returns a String. If someone sets it to ${system.charset} and that environment variable is set to "FOO", for example, then when ConsumeJMS receives a TextMessage it will throw an UnsupportedCharsetException at runtime. I thought I would include this method to give earlier warning. PublishJMS doesn't need this check, because the charset could be set in a flowfile attribute, and UnsupportedCharsetException would just cause the flowfile to go to 'failure'. If you still think the method is unnecessary, though, let me know and I can remove it. > Allow PublishJMS processor to create TextMessages > - > > Key: NIFI-2630 > URL: https://issues.apache.org/jira/browse/NIFI-2630 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.0 >Reporter: James Anderson >Assignee: Michael Moser >Priority: Minor > Labels: patch > Attachments: > 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch > > > Create a new configuration option for PublishJMS that allows the processor to > be configured to emit instances of TextMessages as well as BytesMessage. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
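The trade-off discussed here (validator-time versus runtime charset failure) comes down to plain java.nio.charset behavior. The following standalone sketch illustrates it; it is not NiFi's actual CHARACTER_SET_VALIDATOR or ConsumeJMS code:

```java
import java.nio.charset.Charset;
import java.nio.charset.UnsupportedCharsetException;

public class CharsetCheck {
    // Early check in the spirit of the @OnScheduled validation discussed above:
    // returns false for unknown charsets instead of failing later at runtime.
    public static boolean isValidCharset(String name) {
        try {
            return Charset.isSupported(name);
        } catch (IllegalArgumentException e) {
            // Thrown for syntactically illegal charset names (e.g. empty string).
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidCharset("UTF-8")); // true
        System.out.println(isValidCharset("FOO"));   // false: well-formed name, but unsupported

        // Without the early check, the failure surfaces only when a message arrives:
        try {
            Charset.forName("FOO");
        } catch (UnsupportedCharsetException e) {
            System.out.println("Runtime failure for charset: " + e.getCharsetName());
        }
    }
}
```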
[GitHub] nifi pull request #2493: Added Pulsar processors and Controller Service
Github user david-streamlio commented on a diff in the pull request: https://github.com/apache/nifi/pull/2493#discussion_r171295857

--- Diff: nifi-nar-bundles/nifi-pulsar-client-services/nifi-pulsar-client-service-api/src/main/java/org/apache/nifi/pulsar/PulsarClientPool.java ---
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.pulsar;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.controller.ControllerService;
+import org.apache.nifi.pulsar.pool.ResourcePool;
+
+
+@Tags({"Pulsar"})
+@CapabilityDescription("Provides the ability to create Pulsar Producer / Consumer instances on demand, based on the configuration." +
+    "properties defined")
+public interface PulsarClientPool extends ControllerService {
+
+    /*
--- End diff --

Will do.

---
[GitHub] nifi pull request #2493: Added Pulsar processors and Controller Service
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2493#discussion_r171294580

--- Diff: nifi-nar-bundles/nifi-pulsar-client-services/nifi-pulsar-client-service-api/src/main/java/org/apache/nifi/pulsar/PulsarClientPool.java ---
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.pulsar;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.controller.ControllerService;
+import org.apache.nifi.pulsar.pool.ResourcePool;
+
+
+@Tags({"Pulsar"})
+@CapabilityDescription("Provides the ability to create Pulsar Producer / Consumer instances on demand, based on the configuration." +
+    "properties defined")
+public interface PulsarClientPool extends ControllerService {
+
+    /*
--- End diff --

Can probably get rid of these lines that are commented out

---
[jira] [Commented] (MINIFICPP-397) Implement RouteOnAttribute
[ https://issues.apache.org/jira/browse/MINIFICPP-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380528#comment-16380528 ] ASF GitHub Bot commented on MINIFICPP-397: -- Github user minifirocks commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/268#discussion_r171293683

--- Diff: libminifi/src/processors/RouteOnAttribute.cpp ---
@@ -0,0 +1,107 @@
+/**
+ * @file RouteOnAttribute.cpp
+ * RouteOnAttribute class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "processors/RouteOnAttribute.h"
+
+#include <memory>
+#include <set>
+#include <string>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+core::Relationship RouteOnAttribute::Unmatched(
+    "unmatched",
+    "Files which do not match any expression are routed here");
+core::Relationship RouteOnAttribute::Failure(
+    "failure",
+    "Failed files are transferred to failure");
+
+void RouteOnAttribute::initialize() {
+  std::set<core::Property> properties;
+  setSupportedProperties(properties);
+}
+
+void RouteOnAttribute::onDynamicPropertyModified(const core::Property &orig_property,
+                                                 const core::Property &new_property) {
+  // Update the routing table when routes are added via dynamic properties.
+  route_properties_[new_property.getName()] = new_property;
+
+  std::set<core::Relationship> relationships;
+
+  for (const auto &route : route_properties_) {
+    core::Relationship route_rel{route.first, "Dynamic route"};
+    route_rels_[route.first] = route_rel;
+    relationships.insert(route_rel);
+    logger_->log_info("RouteOnAttribute registered route '%s' with expression '%s'",
+                      route.first,
+                      route.second.getValue());
+  }
+
+  relationships.insert(Unmatched);
+  relationships.insert(Failure);
+  setSupportedRelationships(relationships);
--- End diff --

bool Connectable::setSupportedRelationships(std::set<core::Relationship> relationships) {
  if (isRunning()) {
    logger_->log_warn("Can not set processor supported relationship while the process %s is running", name_);
    return false;
  }

what if we do the onDynamicPropertyModified while the processor is running

> Implement RouteOnAttribute > -- > > Key: MINIFICPP-397 > URL: https://issues.apache.org/jira/browse/MINIFICPP-397 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > RouteOnAttribute is notably missing from MiNiFi - C++ and should be > implemented. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #268: MINIFICPP-397 Added implementation of Rou...
Github user minifirocks commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/268#discussion_r171293683

--- Diff: libminifi/src/processors/RouteOnAttribute.cpp ---
@@ -0,0 +1,107 @@
+/**
+ * @file RouteOnAttribute.cpp
+ * RouteOnAttribute class implementation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "processors/RouteOnAttribute.h"
+
+#include <memory>
+#include <set>
+#include <string>
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+core::Relationship RouteOnAttribute::Unmatched(
+    "unmatched",
+    "Files which do not match any expression are routed here");
+core::Relationship RouteOnAttribute::Failure(
+    "failure",
+    "Failed files are transferred to failure");
+
+void RouteOnAttribute::initialize() {
+  std::set<core::Property> properties;
+  setSupportedProperties(properties);
+}
+
+void RouteOnAttribute::onDynamicPropertyModified(const core::Property &orig_property,
+                                                 const core::Property &new_property) {
+  // Update the routing table when routes are added via dynamic properties.
+  route_properties_[new_property.getName()] = new_property;
+
+  std::set<core::Relationship> relationships;
+
+  for (const auto &route : route_properties_) {
+    core::Relationship route_rel{route.first, "Dynamic route"};
+    route_rels_[route.first] = route_rel;
+    relationships.insert(route_rel);
+    logger_->log_info("RouteOnAttribute registered route '%s' with expression '%s'",
+                      route.first,
+                      route.second.getValue());
+  }
+
+  relationships.insert(Unmatched);
+  relationships.insert(Failure);
+  setSupportedRelationships(relationships);
--- End diff --

bool Connectable::setSupportedRelationships(std::set<core::Relationship> relationships) {
  if (isRunning()) {
    logger_->log_warn("Can not set processor supported relationship while the process %s is running", name_);
    return false;
  }

what if we do the onDynamicPropertyModified while the processor is running

---
[jira] [Created] (NIFI-4918) JMS Connection Factory setting the dynamic Properties wrong
Julian Gimbel created NIFI-4918: --- Summary: JMS Connection Factory setting the dynamic Properties wrong Key: NIFI-4918 URL: https://issues.apache.org/jira/browse/NIFI-4918 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Affects Versions: 1.5.0, 1.4.0, 1.3.0 Reporter: Julian Gimbel When trying to set the setSSLTrustedCertificate property on the Tibco JMS connection factory, the process will sometimes fail, because this method is overloaded three times and the overloads accept different parameters. Therefore we should implement a fix that checks through the methods to find one that accepts the type of parameter provided by the user. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
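The fix described in this ticket amounts to matching an overloaded setter by parameter type via reflection. Below is a minimal sketch under stated assumptions: the Factory class and its overloads are hypothetical stand-ins (this is not Tibco's actual API, nor the PR's exact code).

```java
import java.lang.reflect.Method;

public class DynamicSetterDemo {
    // Hypothetical target with overloaded setters, standing in for the
    // Tibco connection factory described in the ticket.
    public static class Factory {
        public String value;
        public void setSSLTrustedCertificate(int id) { value = "int:" + id; }
        public void setSSLTrustedCertificate(String path) { value = "str:" + path; }
    }

    // Sketch of the proposed fix: loop over same-named single-argument methods
    // and invoke the first whose parameter type accepts the supplied value.
    public static void setProperty(Object target, String name, Object arg) {
        try {
            Method fallback = null;
            for (Method m : target.getClass().getMethods()) {
                if (!m.getName().equals(name) || m.getParameterCount() != 1) continue;
                if (fallback == null) fallback = m; // remember the first candidate
                if (m.getParameterTypes()[0].isInstance(arg)) {
                    m.invoke(target, arg);
                    return;
                }
            }
            // No overload matched: fall back to the first method and let the
            // resulting error surface, as the commit message describes.
            if (fallback != null) fallback.invoke(target, arg);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Factory f = new Factory();
        setProperty(f, "setSSLTrustedCertificate", "/path/cert.pem");
        System.out.println(f.value); // str:/path/cert.pem
    }
}
```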
[GitHub] nifi issue #2493: Added Pulsar processors and Controller Service
Github user david-streamlio commented on the issue: https://github.com/apache/nifi/pull/2493 When I started writing these processors, I used the Kafka ones as a model. However, I felt that it would be easier to configure the Pulsar client once (for example, the SSLContextService integration) and then pool both the Producers and Consumers for re-use, similar to a database connection pool. ---
[jira] [Commented] (NIFI-2630) Allow PublishJMS processor to create TextMessages
[ https://issues.apache.org/jira/browse/NIFI-2630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380479#comment-16380479 ] ASF GitHub Bot commented on NIFI-2630: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2458#discussion_r171283791 --- Diff: nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/ConsumeJMS.java --- @@ -136,9 +155,16 @@ relationships = Collections.unmodifiableSet(_relationships); } +@OnScheduled --- End diff -- Hey @mosermw is this something we need to do? We have the Character Set Validator already in place so it should never get this far if it's not valid. > Allow PublishJMS processor to create TextMessages > - > > Key: NIFI-2630 > URL: https://issues.apache.org/jira/browse/NIFI-2630 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.6.0 >Reporter: James Anderson >Assignee: Michael Moser >Priority: Minor > Labels: patch > Attachments: > 0001-NIFI-2630-Allow-PublishJMS-processor-to-create-TextM.patch > > > Create a new configuration option for PublishJMS that allows the processor to > be configured to emit instances of TextMessages as well as BytesMessage. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2458: NIFI-2630 Allow PublishJMS to send TextMessages
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2458#discussion_r171283791 --- Diff: nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/main/java/org/apache/nifi/jms/processors/ConsumeJMS.java --- @@ -136,9 +155,16 @@ relationships = Collections.unmodifiableSet(_relationships); } +@OnScheduled --- End diff -- Hey @mosermw is this something we need to do? We have the Character Set Validator already in place so it should never get this far if it's not valid. ---
[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication
[ https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380467#comment-16380467 ] ASF GitHub Bot commented on NIFI-4838: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2448#discussion_r171280456

--- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java ---
@@ -129,26 +144,44 @@
         .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
         .build();
     static final PropertyDescriptor RESULTS_PER_FLOWFILE = new PropertyDescriptor.Builder()
-        .name("results-per-flowfile")
-        .displayName("Results Per FlowFile")
-        .description("How many results to put into a flowfile at once. The whole body will be treated as a JSON array of results.")
-        .required(false)
-        .expressionLanguageSupported(true)
-        .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
-        .build();
+            .name("results-per-flowfile")
+            .displayName("Results Per FlowFile")
+            .description("How many results to put into a flowfile at once. The whole body will be treated as a JSON array of results.")
+            .required(false)
+            .expressionLanguageSupported(true)
+            .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+            .build();
+    static final PropertyDescriptor ESTIMATE_PROGRESS = new PropertyDescriptor.Builder()
+            .name("estimate-progress")
+            .displayName("Estimate Progress")
+            .description("If enabled, a count query will be run first, using the configured query, and attributes will be added to each flowfile showing how far they are into the result set.")
+            .required(true)
+            .addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+            .allowableValues(GM_TRUE, GM_FALSE)
+            .defaultValue(GM_FALSE.getValue())
+            .build();
+    static final PropertyDescriptor PROGRESSIVE_COMMITS = new PropertyDescriptor.Builder()
+            .name("progressive-commits")
+            .displayName("Commit After Each Batch")
--- End diff --

I'm a little confused here about the term "batch". It doesn't seem directly related to the Batch Size property (since the latter is kind of a server-side thing, like a JDBC "fetch size"?), and in the code a "batch" seems to refer to the number of files set in Results Per Flowfile. Can you explain a little more about what's going on with the progressive commits? If I have Results per Flowfile set to 100 and Batch Size set to 1000, would I get 10 flow files committed at once as one "batch"? Or is it always one commit per flowfile (if Commit After Each Batch is set)?

> Make GetMongo support multiple commits and give some progress indication > > > Key: NIFI-4838 > URL: https://issues.apache.org/jira/browse/NIFI-4838 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > > It shouldn't wait until the end to do a commit() call because the effect is > that GetMongo looks like it has hung to a user who is pulling a very large > data set. > It should also have an option for running a count query to get the current > approximate count of documents that would match the query and append an > attribute that indicates where a flowfile stands in the total result count. > Ex: > query.progress.point.start = 2500 > query.progress.point.end = 5000 > query.count.estimate = 17,568,231 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2448: NIFI-4838 Added configurable progressive commits to...
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2448#discussion_r171280456

--- Diff: nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/GetMongo.java ---
@@ -129,26 +144,44 @@
         .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
         .build();
     static final PropertyDescriptor RESULTS_PER_FLOWFILE = new PropertyDescriptor.Builder()
-        .name("results-per-flowfile")
-        .displayName("Results Per FlowFile")
-        .description("How many results to put into a flowfile at once. The whole body will be treated as a JSON array of results.")
-        .required(false)
-        .expressionLanguageSupported(true)
-        .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
-        .build();
+            .name("results-per-flowfile")
+            .displayName("Results Per FlowFile")
+            .description("How many results to put into a flowfile at once. The whole body will be treated as a JSON array of results.")
+            .required(false)
+            .expressionLanguageSupported(true)
+            .addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+            .build();
+    static final PropertyDescriptor ESTIMATE_PROGRESS = new PropertyDescriptor.Builder()
+            .name("estimate-progress")
+            .displayName("Estimate Progress")
+            .description("If enabled, a count query will be run first, using the configured query, and attributes will be added to each flowfile showing how far they are into the result set.")
+            .required(true)
+            .addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+            .allowableValues(GM_TRUE, GM_FALSE)
+            .defaultValue(GM_FALSE.getValue())
+            .build();
+    static final PropertyDescriptor PROGRESSIVE_COMMITS = new PropertyDescriptor.Builder()
+            .name("progressive-commits")
+            .displayName("Commit After Each Batch")
--- End diff --

I'm a little confused here about the term "batch". It doesn't seem directly related to the Batch Size property (since the latter is kind of a server-side thing, like a JDBC "fetch size"?), and in the code a "batch" seems to refer to the number of files set in Results Per Flowfile. Can you explain a little more about what's going on with the progressive commits? If I have Results per Flowfile set to 100 and Batch Size set to 1000, would I get 10 flow files committed at once as one "batch"? Or is it always one commit per flowfile (if Commit After Each Batch is set)?

---
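For intuition on the Results Per FlowFile question above, the grouping can be sketched as a toy chunking function. This is an assumption-laden illustration, not GetMongo's implementation: each output batch stands for one flowfile body, independent of the server-side fetch Batch Size.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkDemo {
    // Toy illustration of "Results Per FlowFile"-style batching: each batch
    // (one flowfile body) holds at most perFlowFile results.
    public static List<List<String>> chunk(List<String> results, int perFlowFile) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < results.size(); i += perFlowFile) {
            // Copy the sublist so each batch is independent of the source list.
            batches.add(new ArrayList<>(results.subList(i, Math.min(i + perFlowFile, results.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        // With 5 results and Results Per FlowFile = 2, three flowfiles result.
        System.out.println(chunk(List.of("a", "b", "c", "d", "e"), 2)); // [[a, b], [c, d], [e]]
    }
}
```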
[jira] [Commented] (NIFI-4916) Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes
[ https://issues.apache.org/jira/browse/NIFI-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380460#comment-16380460 ] Pierre Villard commented on NIFI-4916: -- Sure, no problem at all. > Flowfiles created by ConvertExcelToCSVProcessor do not inherit attributes > - > > Key: NIFI-4916 > URL: https://issues.apache.org/jira/browse/NIFI-4916 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 > Environment: NiFi 1.5.0 >Reporter: Fabio Coutinho >Priority: Major > Attachments: ProvenanceCsvFlowFile.png, ProvenanceXlsFlowFile.png > > > When converting a flowfile containing an XLS file to CSV, the newly generated > flowfiles do not inherit the attributes from the original one. > Without the original flowfile's attributes, important information retrieved > before conversion (for example, file metadata) cannot be used after the file > is converted. I have attached 2 image files showing the attributes before and > after conversion. Please note that the input file has a lot of metadata > retrieved from Amazon S3 that does not exist on the new flowfile. > I believe that like most other NiFi processors, the original attributes > should be copied to new flowfiles. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication
[ https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-4838: --- Status: Patch Available (was: Open) > Make GetMongo support multiple commits and give some progress indication > > > Key: NIFI-4838 > URL: https://issues.apache.org/jira/browse/NIFI-4838 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > > It shouldn't wait until the end to do a commit() call because the effect is > that GetMongo looks like it has hung to a user who is pulling a very large > data set. > It should also have an option for running a count query to get the current > approximate count of documents that would match the query and append an > attribute that indicates where a flowfile stands in the total result count. > Ex: > query.progress.point.start = 2500 > query.progress.point.end = 5000 > query.count.estimate = 17,568,231 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4838) Make GetMongo support multiple commits and give some progress indication
[ https://issues.apache.org/jira/browse/NIFI-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380422#comment-16380422 ] ASF GitHub Bot commented on NIFI-4838: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2448 @mattyb149 Rebased, and passes all tests (updated some unit tests too). > Make GetMongo support multiple commits and give some progress indication > > > Key: NIFI-4838 > URL: https://issues.apache.org/jira/browse/NIFI-4838 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > > It shouldn't wait until the end to do a commit() call because the effect is > that GetMongo looks like it has hung to a user who is pulling a very large > data set. > It should also have an option for running a count query to get the current > approximate count of documents that would match the query and append an > attribute that indicates where a flowfile stands in the total result count. > Ex: > query.progress.point.start = 2500 > query.progress.point.end = 5000 > query.count.estimate = 17,568,231 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2448: NIFI-4838 Added configurable progressive commits to GetMon...
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2448 @mattyb149 Rebased, and passes all tests (updated some unit tests too). ---
[jira] [Commented] (NIFI-4876) Add Minimum Age Filter to ListS3
[ https://issues.apache.org/jira/browse/NIFI-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380410#comment-16380410 ] ASF GitHub Bot commented on NIFI-4876: -- Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/2491 Thanks, @pvillard31 > Add Minimum Age Filter to ListS3 > > > Key: NIFI-4876 > URL: https://issues.apache.org/jira/browse/NIFI-4876 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.5.0 >Reporter: James Wing >Assignee: James Wing >Priority: Minor > Fix For: 1.6.0 > > > ListS3 can experience difficulty reading the latest objects in a rapidly > changing S3 bucket due to the eventually consistent nature of S3. Much of > this difficulty might be avoided by ignoring objects until a minimum age, > even 30 seconds or 1 minute. I propose to add a Minimum Object Age feature > to ListS3, similar to the Minimum File Age in GetFile. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2491: NIFI-4876 Adding Min Object Age to ListS3
Github user jvwing commented on the issue: https://github.com/apache/nifi/pull/2491 Thanks, @pvillard31 ---
[jira] [Commented] (NIFI-4917) Create a new Controller Service for specifying Keytabs
[ https://issues.apache.org/jira/browse/NIFI-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380403#comment-16380403 ] Mark Payne commented on NIFI-4917: -- Is blocked by NIFI-4885 because it needs the more granular @Restricted annotation to indicate that the "access-keytab" permission is required. > Create a new Controller Service for specifying Keytabs > -- > > Key: NIFI-4917 > URL: https://issues.apache.org/jira/browse/NIFI-4917 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > > Currently, we have many processors that use keytabs for authenticating with > kerberos. These processors allow the user to specify the keytab and the > principal. However, in a multi-tenant environment, this can be dangerous. If > users are able to type in the name of any keytab, then they can use any > keytab that the user running nifi has access to. Additionally, they can use > any principal within that keytab. > Using the @Restricted annotation is not really enough because you need that > permission just to use PutHDFS, for example. But you shouldn't have access to > all Keytabs just because you need access to HDFS. NIFI-4885 provides the > ability to make these restrictions more granular. But we need the ability to > specify the keytab & principal external to the processors. This gives users > the ability to control who is able to specify the Keytabs & Principals that > are allowed to be referenced. Further, they can change permissions on those > Controller Services so that only the appropriate users can access them. > We would like to avoid completely removing the Keytab and Principal > properties in those processors for now, though, as it would make a lot of > users' flows now invalid and can be a pain to update. As a result, we should > allow either the Keytab/Principal properties to be referenced OR the > controller service. 
Additionally, we should allow an Environment Variable to > be set that will prevent use of the Keytab & Principal properties directly. > This allows an admin to enforce this rule when he/she chooses to do so > without immediately forcing a lot of property changes. Because Processors > themselves don't have access to nifi.properties we don't want to add a > property there. Also, System Properties are not a good idea because that > could very easily be changed via a script, etc. Environment Variables offer > the correct trade-offs, I believe, and can be easily configured within > bin/nifi-env.sh for most users and could be easily updated in the batch file > for Windows users as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005)