[jira] [Commented] (NIFI-8634) Create Controller Services that allows user to choose between Record Readers/Writers based on Expression Language
[ https://issues.apache.org/jira/browse/NIFI-8634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17356536#comment-17356536 ]

Jon Kessler commented on NIFI-8634:
-----------------------------------

I don't believe they evaluate Expression Language, but the ReaderLookup and RecordSetWriterLookup controller services introduced by https://issues.apache.org/jira/browse/NIFI-5829 get you pretty close to this requirement.

> Create Controller Services that allows user to choose between Record
> Readers/Writers based on Expression Language
> --------------------------------------------------------------------
>
> Key: NIFI-8634
> URL: https://issues.apache.org/jira/browse/NIFI-8634
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Extensions
> Reporter: Mark Payne
> Priority: Major
>
> Typically when a user builds a flow, the user must choose a JSON Reader, CSV
> Reader, JSON Writer, CSV Writer, etc., whatever makes sense for their use case.
> The downside to this approach is that it makes it difficult to build a flow
> that is more generically distributable.
> We should create a RecordReaderLookup service and a RecordWriterLookup
> service. Each would have a single well-known property for the name of the
> Record Reader/Writer service to use. User-defined properties would then be
> used to define those Record Readers/Writers.
> For example, one might configure RecordReaderLookup as such:
> {code:java}
> Selected Reader: #{DataFormat}
> csv: CSVReader
> json: JsonTreeReader
> avro: AvroReader {code}
> In this case, CSVReader, JsonTreeReader, and AvroReader are other Record
> Reader Controller Services. Now, a parameter can be defined with the name
> {{DataFormat}}. If that parameter has a value of {{csv}}, the CSVReader would
> be used. If the parameter has a value of {{json}}, the JsonTreeReader would
> be used, and so forth.
> The same principle would be followed for the Record Writer.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
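[Editor's note] The lookup pattern NIFI-8634 describes can be sketched without any NiFi APIs. This is a hypothetical, self-contained illustration only: the class and method names are invented, and the `selectedKey` argument stands in for the evaluated value of the `#{DataFormat}` parameter, not for NiFi's actual Expression Language evaluation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed lookup controller service: user-defined
// properties map a key (csv, json, avro) to a reader service name, and the
// evaluated "Selected Reader" value picks one of them at runtime.
class ReaderLookupSketch {
    // User-defined properties: key -> reader service name (e.g. csv -> CSVReader)
    private final Map<String, String> readersByKey = new HashMap<>();

    void registerReader(String key, String readerServiceName) {
        readersByKey.put(key, readerServiceName);
    }

    // 'selectedKey' stands in for the evaluated value of #{DataFormat}
    String resolve(String selectedKey) {
        String reader = readersByKey.get(selectedKey);
        if (reader == null) {
            throw new IllegalArgumentException("No reader registered for key: " + selectedKey);
        }
        return reader;
    }

    public static void main(String[] args) {
        ReaderLookupSketch lookup = new ReaderLookupSketch();
        lookup.registerReader("csv", "CSVReader");
        lookup.registerReader("json", "JsonTreeReader");
        lookup.registerReader("avro", "AvroReader");
        System.out.println(lookup.resolve("csv")); // prints CSVReader
    }
}
```

With a parameter value of {{csv}} this resolves to CSVReader, mirroring the example configuration in the issue description.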
[jira] [Created] (NIFI-8523) Update secure ftp processors to allow restriction of algorithms, ciphers and message authentication codes
Jon Kessler created NIFI-8523:
---------------------------------

Summary: Update secure ftp processors to allow restriction of algorithms, ciphers and message authentication codes
Key: NIFI-8523
URL: https://issues.apache.org/jira/browse/NIFI-8523
Project: Apache NiFi
Issue Type: Improvement
Components: Core Framework
Affects Versions: 1.13.2
Reporter: Jon Kessler
Assignee: Jon Kessler

The SFTPTransfer class, which is used for SSH communications by the four secure FTP processors (GetSFTP, ListSFTP, PutSFTP, and FetchSFTP), uses a Java library called net.schmizz.sshj. This library allows one to restrict which algorithms, ciphers, and message authentication codes are used by the SSH client it creates. However, SFTPTransfer is hardcoded to use the DefaultConfig, which enables all available options. I believe it would be beneficial to expose this as a matter of configuration via PropertyDescriptors, so that operators could, if they chose, eliminate options that do not fit within their desired security posture.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
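[Editor's note] The core of the proposal is filtering a library-provided list of named algorithm factories down to an operator-supplied allow-list. The sketch below is illustrative only: it deliberately avoids the sshj dependency and uses plain strings in place of sshj's factory objects; the algorithm names shown are examples, and the validation behavior is an assumption, not sshj's or NiFi's.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Illustrative allow-list filter: given the full set of options a library
// enables by default, keep only those an operator explicitly permits.
class AllowListFilter {
    static List<String> restrict(List<String> available, Set<String> allowed) {
        List<String> result = new ArrayList<>();
        for (String name : available) {
            if (allowed.contains(name)) {
                result.add(name);
            }
        }
        // Guard against a misconfigured allow-list that would leave the
        // client unable to negotiate anything (assumed behavior).
        if (result.isEmpty()) {
            throw new IllegalArgumentException("Allow-list removed every available option");
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> available = Arrays.asList("aes128-ctr", "aes256-ctr", "3des-cbc");
        Set<String> allowed = new LinkedHashSet<>(Arrays.asList("aes256-ctr"));
        System.out.println(restrict(available, allowed)); // [aes256-ctr]
    }
}
```

In the real processors, the filtered lists would presumably be applied to the sshj Config's cipher, MAC, and key-algorithm factory lists before the client is built, with the allow-lists coming from new PropertyDescriptors.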
[jira] [Assigned] (NIFI-8148) Selecting field from array with QueryRecord routes to failure
[ https://issues.apache.org/jira/browse/NIFI-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jon Kessler reassigned NIFI-8148:
---------------------------------

Assignee: (was: Jon Kessler)

> Selecting field from array with QueryRecord routes to failure
> -------------------------------------------------------------
>
> Key: NIFI-8148
> URL: https://issues.apache.org/jira/browse/NIFI-8148
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Reporter: Mark Payne
> Priority: Major
>
> Given the following JSON document coming into QueryRecord:
> {code:json}
> {
>   "name": "John Doe",
>   "try": [
>     {
>       "workAddress": {
>         "number": "123",
>         "street": "5th Avenue",
>         "city": "New York",
>         "state": "NY",
>         "zip": "10020"
>       },
>       "homeAddress": {
>         "number": "456",
>         "street": "116th Avenue",
>         "city": "New York",
>         "state": "NY",
>         "zip": "11697"
>       }
>     }
>   ]
> }
> {code}
> When using a JSON Reader (inferred schema) and JSON Writer (inherit record
> schema), we should be able to use the query:
> SELECT RPATH(try, '/*/zip') AS zip
> FROM FLOWFILE
> The result should be two records, each consisting of a single field named
> 'zip' that is of type String.
> Currently, it throws an Exception and routes to failure.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (NIFI-8148) Selecting field from array with QueryRecord routes to failure
[ https://issues.apache.org/jira/browse/NIFI-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17282558#comment-17282558 ]

Jon Kessler commented on NIFI-8148:
-----------------------------------

[~markap14], will you please confirm that this is the exception you saw so that I know I'm on the right track?

{noformat}
60867 [pool-1-thread-1] ERROR org.apache.nifi.processors.standard.QueryRecord - QueryRecord[id=56c7553f-f299-4e5d-b0fc-a2fc6b2a99f7] Failed to write MapRecord[{zip=[Ljava.lang.Object;@6de8ce3c}] with schema ["zip" : "RECORD"] as a JSON Object due to org.apache.nifi.serialization.record.util.IllegalTypeConversionException: Cannot convert value [[Ljava.lang.Object;@6de8ce3c] of type class [Ljava.lang.Object; to Record for field zip
org.apache.nifi.serialization.record.util.IllegalTypeConversionException: Cannot convert value [[Ljava.lang.Object;@6de8ce3c] of type class [Ljava.lang.Object; to Record for field zip
    at org.apache.nifi.serialization.record.util.DataTypeUtils.toRecord(DataTypeUtils.java:398)
    at org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:219)
    at org.apache.nifi.serialization.record.util.DataTypeUtils.convertType(DataTypeUtils.java:171)
    at org.apache.nifi.json.WriteJsonResult.writeValue(WriteJsonResult.java:327)
    at org.apache.nifi.json.WriteJsonResult.writeRecord(WriteJsonResult.java:199)
    at org.apache.nifi.json.WriteJsonResult.writeRecord(WriteJsonResult.java:148)
    at org.apache.nifi.serialization.AbstractRecordSetWriter.write(AbstractRecordSetWriter.java:59)
    at org.apache.nifi.serialization.AbstractRecordSetWriter.write(AbstractRecordSetWriter.java:52)
    at org.apache.nifi.processors.standard.QueryRecord$1.process(QueryRecord.java:347)
{noformat}

> Selecting field from array with QueryRecord routes to failure
> -------------------------------------------------------------
>
> Key: NIFI-8148
> URL: https://issues.apache.org/jira/browse/NIFI-8148
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Reporter: Mark Payne
> Assignee: Jon Kessler
> Priority: Major

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Assigned] (NIFI-8148) Selecting field from array with QueryRecord routes to failure
[ https://issues.apache.org/jira/browse/NIFI-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jon Kessler reassigned NIFI-8148:
---------------------------------

Assignee: Jon Kessler

> Selecting field from array with QueryRecord routes to failure
> -------------------------------------------------------------
>
> Key: NIFI-8148
> URL: https://issues.apache.org/jira/browse/NIFI-8148
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Reporter: Mark Payne
> Assignee: Jon Kessler
> Priority: Major

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (NIFI-8194) Extraneous WARN log messages about authentication protocols not being configured
[ https://issues.apache.org/jira/browse/NIFI-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jon Kessler updated NIFI-8194:
------------------------------

Status: Patch Available (was: In Progress)

> Extraneous WARN log messages about authentication protocols not being
> configured
> ---------------------------------------------------------------------
>
> Key: NIFI-8194
> URL: https://issues.apache.org/jira/browse/NIFI-8194
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.13.0
> Reporter: Mark Payne
> Assignee: Jon Kessler
> Priority: Major
> Time Spent: 10m
> Remaining Estimate: 0h
>
> When running a secure instance (secured via TLS), each time that a user opens
> a browser to a NiFi instance, the following logs are dumped into
> `nifi-app.log`:
> {code:java}
> 2021-02-03 11:35:57,559 WARN [NiFi Web Server-25] org.apache.nifi.web.api.AccessResource Kerberos ticket login not supported by this NiFi.
> 2021-02-03 11:35:57,616 WARN [NiFi Web Server-22] org.apache.nifi.web.api.AccessResource OpenId Connect support is not configured
> 2021-02-03 11:35:57,624 WARN [NiFi Web Server-25] org.apache.nifi.web.api.AccessResource SAML support is not configured {code}
> These should probably go into the nifi-user.log instead of nifi-app.log. But
> more importantly, the fact that they are not configured is very normal and
> not worthy of a warning. It should be INFO level at max, probably DEBUG level.
> It's unclear if these warnings were appearing before 1.13, but I think they
> were in the user log instead of the app log. Could be mistaken about that,
> though.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (NIFI-8194) Extraneous WARN log messages about authentication protocols not being configured
[ https://issues.apache.org/jira/browse/NIFI-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279748#comment-17279748 ]

Jon Kessler commented on NIFI-8194:
-----------------------------------

[~markap14] On a fresh build from main, these log messages are appearing in the user log for me. That being said, I noticed a fourth log message that I believe is in the same logical group as the other three that you mentioned and will include it in my PR.

{noformat}
2021-02-05 15:03:57,187 WARN [main] o.a.n.w.s.o.StandardOidcIdentityProvider The OIDC provider is not configured or enabled
{noformat}

From this code snippet:

{noformat}
@Override
public void initializeProvider() {
    // attempt to process the oidc configuration if configured
    if (!properties.isOidcEnabled()) {
        logger.warn("The OIDC provider is not configured or enabled");
        return;
    }
{noformat}

> Extraneous WARN log messages about authentication protocols not being
> configured
> ---------------------------------------------------------------------
>
> Key: NIFI-8194
> URL: https://issues.apache.org/jira/browse/NIFI-8194
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.13.0
> Reporter: Mark Payne
> Assignee: Jon Kessler
> Priority: Major

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Assigned] (NIFI-8194) Extraneous WARN log messages about authentication protocols not being configured
[ https://issues.apache.org/jira/browse/NIFI-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jon Kessler reassigned NIFI-8194:
---------------------------------

Assignee: Jon Kessler

> Extraneous WARN log messages about authentication protocols not being
> configured
> ---------------------------------------------------------------------
>
> Key: NIFI-8194
> URL: https://issues.apache.org/jira/browse/NIFI-8194
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.13.0
> Reporter: Mark Payne
> Assignee: Jon Kessler
> Priority: Major

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (NIFI-8180) Add default flowfile expiration period to nifi.properties
[ https://issues.apache.org/jira/browse/NIFI-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jon Kessler resolved NIFI-8180.
-------------------------------

Resolution: Won't Do

> Add default flowfile expiration period to nifi.properties
> ---------------------------------------------------------
>
> Key: NIFI-8180
> URL: https://issues.apache.org/jira/browse/NIFI-8180
> Project: Apache NiFi
> Issue Type: Improvement
> Affects Versions: 1.12.1
> Reporter: Jon Kessler
> Assignee: Jon Kessler
> Priority: Minor
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> I have a use case where I would like a default flowfile expiration period for
> each newly created FlowFileQueue to avoid having to set them all by hand. I
> propose adding that to nifi.properties. Call it
> "nifi.queue.flowfile.expiration.period" and give it a default of "0 min" to
> match the existing hardcoded default for backwards compatibility purposes.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (NIFI-8180) Add default flowfile expiration period to nifi.properties
[ https://issues.apache.org/jira/browse/NIFI-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jon Kessler updated NIFI-8180:
------------------------------

Status: Open (was: Patch Available)

> Add default flowfile expiration period to nifi.properties
> ---------------------------------------------------------
>
> Key: NIFI-8180
> URL: https://issues.apache.org/jira/browse/NIFI-8180
> Project: Apache NiFi
> Issue Type: Improvement
> Affects Versions: 1.12.1
> Reporter: Jon Kessler
> Assignee: Jon Kessler
> Priority: Minor

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (NIFI-8180) Add default flowfile expiration period to nifi.properties
[ https://issues.apache.org/jira/browse/NIFI-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jon Kessler updated NIFI-8180:
------------------------------

Status: Patch Available (was: In Progress)

> Add default flowfile expiration period to nifi.properties
> ---------------------------------------------------------
>
> Key: NIFI-8180
> URL: https://issues.apache.org/jira/browse/NIFI-8180
> Project: Apache NiFi
> Issue Type: Improvement
> Affects Versions: 1.12.1
> Reporter: Jon Kessler
> Assignee: Jon Kessler
> Priority: Minor

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (NIFI-8180) Add default flowfile expiration period to nifi.properties
Jon Kessler created NIFI-8180:
---------------------------------

Summary: Add default flowfile expiration period to nifi.properties
Key: NIFI-8180
URL: https://issues.apache.org/jira/browse/NIFI-8180
Project: Apache NiFi
Issue Type: Improvement
Affects Versions: 1.12.1
Reporter: Jon Kessler
Assignee: Jon Kessler

I have a use case where I would like a default flowfile expiration period for each newly created FlowFileQueue to avoid having to set them all by hand. I propose adding that to nifi.properties. Call it "nifi.queue.flowfile.expiration.period" and give it a default of "0 min" to match the existing hardcoded default for backwards compatibility purposes.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
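[Editor's note] The proposed lookup is a simple property-with-fallback. The sketch below uses java.util.Properties rather than NiFi's own NiFiProperties class, and the "nifi.queue.flowfile.expiration.period" key is the one proposed (not an existing property); the "0 min" fallback matches the backwards-compatible default described in the issue.

```java
import java.util.Properties;

// Sketch of reading the proposed default-expiration property, falling back to
// the existing hardcoded default of "0 min" when the key is absent.
class DefaultExpirationSketch {
    static final String KEY = "nifi.queue.flowfile.expiration.period";

    static String defaultExpiration(Properties niFiProperties) {
        return niFiProperties.getProperty(KEY, "0 min");
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        System.out.println(defaultExpiration(props)); // 0 min (backwards-compatible fallback)
        props.setProperty(KEY, "12 hours");
        System.out.println(defaultExpiration(props)); // 12 hours
    }
}
```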
[jira] [Updated] (NIFI-8126) Include Total Queued Duration in metrics reported via ConnectionStatus
[ https://issues.apache.org/jira/browse/NIFI-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jon Kessler updated NIFI-8126:
------------------------------

Status: Patch Available (was: In Progress)

> Include Total Queued Duration in metrics reported via ConnectionStatus
> ----------------------------------------------------------------------
>
> Key: NIFI-8126
> URL: https://issues.apache.org/jira/browse/NIFI-8126
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.12.1
> Reporter: Jon Kessler
> Assignee: Jon Kessler
> Priority: Minor
> Labels: Metrics, reporting_task
> Time Spent: 20m
> Remaining Estimate: 0h
>
> On the graph, when listing a queue, you are able to see the queued duration
> for individual flowfiles. I believe that either a total queued duration or an
> average queued duration for the connection as a whole would be a valuable
> metric to include in the ConnectionStatus object that is available to
> ReportingTasks.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (NIFI-8126) Include Total Queued Duration in metrics reported via ConnectionStatus
[ https://issues.apache.org/jira/browse/NIFI-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17262627#comment-17262627 ]

Jon Kessler commented on NIFI-8126:
-----------------------------------

[~EndzeitBegins] Max queue duration is easy enough to include, so I will do so. And I agree that a requirement regarding lineage duration should be a separate Jira issue.

> Include Total Queued Duration in metrics reported via ConnectionStatus
> ----------------------------------------------------------------------
>
> Key: NIFI-8126
> URL: https://issues.apache.org/jira/browse/NIFI-8126
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.12.1
> Reporter: Jon Kessler
> Assignee: Jon Kessler
> Priority: Minor
> Labels: Metrics, reporting_task

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (NIFI-8126) Include Total Queued Duration in metrics reported via ConnectionStatus
Jon Kessler created NIFI-8126:
---------------------------------

Summary: Include Total Queued Duration in metrics reported via ConnectionStatus
Key: NIFI-8126
URL: https://issues.apache.org/jira/browse/NIFI-8126
Project: Apache NiFi
Issue Type: Improvement
Components: Core Framework
Affects Versions: 1.12.1
Reporter: Jon Kessler
Assignee: Jon Kessler

On the graph, when listing a queue, you are able to see the queued duration for individual flowfiles. I believe that either a total queued duration or an average queued duration for the connection as a whole would be a valuable metric to include in the ConnectionStatus object that is available to ReportingTasks.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
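[Editor's note] The total and average queued durations proposed here fall out directly from per-FlowFile enqueue timestamps. The following is a minimal sketch under stated assumptions: the method and field names are invented (they are not NiFi's ConnectionStatus API), and timestamps are plain epoch-millisecond longs.

```java
// Hypothetical derivation of connection-wide queued-duration metrics from the
// enqueue timestamps of the FlowFiles currently in the queue.
class QueuedDurationSketch {
    // Sum of (now - enqueueTime) over all queued FlowFiles.
    static long totalQueuedMillis(long[] enqueueTimes, long now) {
        long total = 0;
        for (long t : enqueueTimes) {
            total += now - t;
        }
        return total;
    }

    // Average queued duration; zero for an empty queue.
    static long averageQueuedMillis(long[] enqueueTimes, long now) {
        return enqueueTimes.length == 0
                ? 0
                : totalQueuedMillis(enqueueTimes, now) / enqueueTimes.length;
    }

    public static void main(String[] args) {
        long now = 10_000;
        long[] enqueued = {4_000, 6_000, 8_000}; // queued for 6s, 4s, 2s
        System.out.println(totalQueuedMillis(enqueued, now));   // 12000
        System.out.println(averageQueuedMillis(enqueued, now)); // 4000
    }
}
```

A max queued duration (mentioned in the comment thread below) would be the largest single `now - enqueueTime` over the same timestamps.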
[jira] [Commented] (NIFI-6866) Create a persistent or stateful stats repository
[ https://issues.apache.org/jira/browse/NIFI-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16977433#comment-16977433 ]

Jon Kessler commented on NIFI-6866:
-----------------------------------

In hindsight, state management probably isn't the best spot for this.

> Create a persistent or stateful stats repository
> ------------------------------------------------
>
> Key: NIFI-6866
> URL: https://issues.apache.org/jira/browse/NIFI-6866
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Core Framework
> Reporter: Jon Kessler
> Assignee: Jon Kessler
> Priority: Minor
>
> Create a stats repository that will hold stats across a restart. This could
> be useful in diagnosing issues that required or caused a restart by giving a
> better picture of what was happening leading up to the restart. I believe
> this can be accomplished using NiFi's state management.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (NIFI-6866) Create a persistent or stateful stats repository
Jon Kessler created NIFI-6866:
---------------------------------

Summary: Create a persistent or stateful stats repository
Key: NIFI-6866
URL: https://issues.apache.org/jira/browse/NIFI-6866
Project: Apache NiFi
Issue Type: New Feature
Components: Core Framework
Reporter: Jon Kessler

Create a stats repository that will hold stats across a restart. This could be useful in diagnosing issues that required or caused a restart by giving a better picture of what was happening leading up to the restart. I believe this can be accomplished using NiFi's state management.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
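[Editor's note] The restart-surviving stats idea can be sketched as "persist counters on update, restore on construction". Everything below is hypothetical: an in-memory map stands in for whatever durable store is ultimately chosen (the comment thread later notes that NiFi's state management may not be the right spot), and the class and key names are invented.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a stats repository that survives restarts by persisting counters
// to a key/value store and restoring them when a new instance starts up.
class StatefulStatsSketch {
    private final Map<String, String> stateStore; // stand-in for a durable store
    private long bytesRead;

    StatefulStatsSketch(Map<String, String> stateStore) {
        this.stateStore = stateStore;
        // Restore the stats persisted before the last shutdown, if any.
        this.bytesRead = Long.parseLong(stateStore.getOrDefault("bytesRead", "0"));
    }

    void addBytesRead(long n) {
        bytesRead += n;
        stateStore.put("bytesRead", Long.toString(bytesRead)); // persist on update
    }

    long getBytesRead() {
        return bytesRead;
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        StatefulStatsSketch stats = new StatefulStatsSketch(store);
        stats.addBytesRead(100);
        // Simulated restart: a new instance restores from the same store.
        StatefulStatsSketch afterRestart = new StatefulStatsSketch(store);
        System.out.println(afterRestart.getBytesRead()); // 100
    }
}
```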
[jira] [Assigned] (NIFI-6866) Create a persistent or stateful stats repository
[ https://issues.apache.org/jira/browse/NIFI-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jon Kessler reassigned NIFI-6866:
---------------------------------

Assignee: Jon Kessler

> Create a persistent or stateful stats repository
> ------------------------------------------------
>
> Key: NIFI-6866
> URL: https://issues.apache.org/jira/browse/NIFI-6866
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Core Framework
> Reporter: Jon Kessler
> Assignee: Jon Kessler
> Priority: Minor

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (NIFI-6831) Create a flowfile queue implementation with global data priority awareness
[ https://issues.apache.org/jira/browse/NIFI-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16972495#comment-16972495 ]

Jon Kessler commented on NIFI-6831:
-----------------------------------

I've attached a template and a companion sample ruleset to give a basic demonstration of this new capability. I suggest allowing flowfiles to accumulate in the queues before the last set of UpdateAttribute processors for a while (get 20k-ish in there) before starting them. You will see flowfiles travel through that last set of processors at different rates based on their priorities. Prior to starting NiFi you must enable this feature in nifi.properties [1]. I also suggest increasing the swap threshold for queues to avoid swapping (swap is implemented, it just obviously slows things down). Lastly, you must add priority rules via the context menu at the top right of the UI before any staggering will occur. See the attached sample for what I used for this template.

[1] nifi.controller.flowfilequeue.buckets=true

> Create a flowfile queue implementation with global data priority awareness
> --------------------------------------------------------------------------
>
> Key: NIFI-6831
> URL: https://issues.apache.org/jira/browse/NIFI-6831
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Core Framework
> Affects Versions: 1.11.0
> Reporter: Jon Kessler
> Assignee: Jon Kessler
> Priority: Major
> Attachments: Priority Rules for demo.jpg, Priority_Demo.xml

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
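[Editor's note] The bucket-per-rule queue described in NIFI-6831 can be sketched minimally: an ordered rule list, one FIFO bucket per rule plus one for unmatched FlowFiles, evaluation once on insertion, and polling that drains higher-priority buckets first. This is an illustrative reduction only: real rules would be Expression Language against FlowFile attributes, a FlowFile here is just a String, and the staggering/state-checking behavior is omitted.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Predicate;

// Minimal bucket-sort queue: one FIFO bucket per priority rule, plus a final
// bucket for FlowFiles that match no rule.
class BucketQueueSketch {
    private final List<Predicate<String>> rules;
    private final List<Deque<String>> buckets = new ArrayList<>();

    BucketQueueSketch(List<Predicate<String>> rules) {
        this.rules = rules;
        for (int i = 0; i <= rules.size(); i++) {
            buckets.add(new ArrayDeque<>()); // one per rule + one "unmatched"
        }
    }

    // Evaluate once on insertion; the highest-ranked matching rule wins.
    void offer(String flowFile) {
        for (int i = 0; i < rules.size(); i++) {
            if (rules.get(i).test(flowFile)) {
                buckets.get(i).addLast(flowFile);
                return;
            }
        }
        buckets.get(rules.size()).addLast(flowFile);
    }

    // Poll drains higher-priority buckets first, FIFO within a bucket.
    String poll() {
        for (Deque<String> bucket : buckets) {
            if (!bucket.isEmpty()) {
                return bucket.pollFirst();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Predicate<String>> rules = new ArrayList<>();
        rules.add(f -> f.startsWith("urgent"));
        rules.add(f -> f.startsWith("routine"));
        BucketQueueSketch q = new BucketQueueSketch(rules);
        q.offer("routine-1");
        q.offer("urgent-1");
        q.offer("bulk-1");
        System.out.println(q.poll()); // urgent-1
        System.out.println(q.poll()); // routine-1
        System.out.println(q.poll()); // bulk-1
    }
}
```

Because the bucket index would travel with the FlowFile, downstream connections can place it without re-running the rules, which is the constant-sorting saving the description calls out.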
[jira] [Updated] (NIFI-6831) Create a flowfile queue implementation with global data priority awareness
[ https://issues.apache.org/jira/browse/NIFI-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Kessler updated NIFI-6831: -- Attachment: Priority Rules for demo.jpg
> Create a flowfile queue implementation with global data priority awareness
> --
>
> Key: NIFI-6831
> URL: https://issues.apache.org/jira/browse/NIFI-6831
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Core Framework
> Affects Versions: 1.11.0
> Reporter: Jon Kessler
> Assignee: Jon Kessler
> Priority: Major
> Attachments: Priority Rules for demo.jpg, Priority_Demo.xml
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> There is currently no way to process data in priority order on a flow-wide or global scale. There are several issues with the way sorting by a priority attribute is currently done in the framework, which I believe we can address with a new flowfile queue implementation. Those shortcomings are:
> * Scheduling: No consideration is given to data priority when determining which component is given the next available thread with which to work.
> * Constant sorting: Because all FlowFiles in a given connection share the same PriorityQueue, they must be sorted every time they move. While this sort is efficient, it can add up as queues grow deep.
> * Administration: There is a costly human element to managing the value used as a priority ranking as priorities change. You must also ensure every connection in the appropriate flow has the proper prioritizer assigned to it to make use of the property.
> The design goals of this new priority mechanism and flowfile queue implementation are:
> * Instead of using the value of a FlowFile attribute as a ranking, maintain a set of expression language rules to define your priorities. The highest-ranked rule that a given FlowFile satisfies will be that FlowFile's priority.
> * Because we have a finite set of priority rules, we can utilize a bucket sort in our connections: one bucket per priority rule. The bucket/rule with which a FlowFile is associated will be maintained so that, as it moves through the system, we do not have to re-evaluate that FlowFile against our ruleset unless we have reason to do so.
> * Control where in your flow FlowFiles are evaluated against the ruleset with a new Prioritizer implementation: BucketPrioritizer.
> * When this queue implementation is polled, it will be able to check state to see if any data of a higher priority than what it currently contains recently (within 5s) moved elsewhere in the system. If higher priority data has recently moved elsewhere, the connection will only provide a FlowFile X% of the time, where X is defined along with the rule. This allows higher priority data to have more frequent access to threads without thread-starving lower priority data.
> * Rules will be managed via a menu option for the flow, and changes to them take effect instantly. This allows you to change your priorities without stopping/editing/restarting various components on the graph.
> Additional design considerations:
> The sorting function here takes place on insertion into any connection on which a BucketPrioritizer is set. Once a FlowFile has been sorted into a bucket, we maintain that state so that each time it moves into a new connection we already know in which bucket it should be placed, without needing to have a BucketPrioritizer set on that connection. Each bucket in a connection is just a FIFO queue, so no additional sorting is done. You should only have to configure connections to use the BucketPrioritizer at points in your flow where you believe you'll have enough information to accurately determine priority, but not beyond that point unless you want to re-evaluate downstream for some reason. There is administration involved in setting these BucketPrioritizers on some connections, but it should be minimal per flow (sometimes as few as one).
> When you delete a rule, each FlowFile that was already associated with that rule will be re-evaluated against the ruleset the next time it enters a connection, regardless of whether or not a BucketPrioritizer was set on that connection. Also, FlowFiles that have yet to be evaluated (have yet to encounter a BucketPrioritizer) will not be staggered. This was a design decision: if we don't know the priority of a given FlowFile, we should get it to that point in the flow as soon as possible. This decision was the result of empirical evidence that when we did stagger unevaluated data, an incoming flow of high priority data slowed its own upstream processing down once it was identified and processed as high priority.
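The bucket-sort design above (one FIFO bucket per rule, bucket index evaluated once and carried with the FlowFile) can be sketched in a few lines. This is a minimal, hypothetical Java sketch, not the actual NiFi implementation; plain Strings stand in for FlowFile identifiers, and the class name is illustrative:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// One FIFO bucket per priority rule, plus a trailing bucket for FlowFiles
// that match no rule. Lower index = higher-ranked rule.
public class PriorityBucketQueue {
    private final List<Deque<String>> buckets = new ArrayList<>();

    public PriorityBucketQueue(final int ruleCount) {
        for (int i = 0; i <= ruleCount; i++) {
            buckets.add(new ArrayDeque<>());
        }
    }

    // bucketIndex was determined once (e.g. by a BucketPrioritizer) and is
    // carried with the FlowFile, so enqueueing is O(1) with no re-sorting.
    public void offer(final String flowFileId, final int bucketIndex) {
        buckets.get(bucketIndex).addLast(flowFileId);
    }

    // Drain the highest-ranked non-empty bucket; each bucket is plain FIFO.
    public String poll() {
        for (final Deque<String> bucket : buckets) {
            if (!bucket.isEmpty()) {
                return bucket.pollFirst();
            }
        }
        return null;
    }
}
```

Because each bucket is FIFO and the bucket index is cached, downstream connections never need to re-sort or re-evaluate, which is the core of the "constant sorting" fix described above.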
[jira] [Updated] (NIFI-6831) Create a flowfile queue implementation with global data priority awareness
[ https://issues.apache.org/jira/browse/NIFI-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Kessler updated NIFI-6831: -- Attachment: Priority_Demo.xml -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (NIFI-6831) Create a flowfile queue implementation with global data priority awareness
[ https://issues.apache.org/jira/browse/NIFI-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Kessler reassigned NIFI-6831: - Assignee: Jon Kessler -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-6831) Create a flowfile queue implementation with global data priority awareness
Jon Kessler created NIFI-6831: - Summary: Create a flowfile queue implementation with global data priority awareness Key: NIFI-6831 URL: https://issues.apache.org/jira/browse/NIFI-6831 Project: Apache NiFi Issue Type: New Feature Components: Core Framework Affects Versions: 1.11.0 Reporter: Jon Kessler -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-6493) Allow components access to the VariableRegistry
Jon Kessler created NIFI-6493: - Summary: Allow components access to the VariableRegistry Key: NIFI-6493 URL: https://issues.apache.org/jira/browse/NIFI-6493 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Affects Versions: 1.9.2 Reporter: Jon Kessler I believe it would be beneficial to give processors and controller services read and write access to their process group's variable registry through the framework. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFI-5378) NiFiProperties: Add check to validation to prevent startup when duplicate keys are encountered
Jon Kessler created NIFI-5378: - Summary: NiFiProperties: Add check to validation to prevent startup when duplicate keys are encountered Key: NIFI-5378 URL: https://issues.apache.org/jira/browse/NIFI-5378 Project: Apache NiFi Issue Type: Improvement Components: Configuration Affects Versions: 1.7.0 Reporter: Jon Kessler Currently, duplicate keys can exist in the nifi.properties file, and there is no guarantee which value will be used when the system starts. There is already a validation method in NiFiProperties.java that runs at startup. Logic should be added so that when duplicates are encountered the system is not permitted to start, with a clear log message stating which keys are duplicated. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
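The check proposed above amounts to counting key occurrences while scanning the file. A hypothetical sketch (illustrative names, not the existing NiFiProperties code; only simple `key=value` lines are handled):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Scan properties-file lines and collect every key that appears more than
// once, so startup can fail with a message naming the offending keys.
public class DuplicateKeyChecker {
    public static List<String> findDuplicateKeys(final List<String> lines) {
        final Map<String, Integer> counts = new LinkedHashMap<>();
        for (final String line : lines) {
            final String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("#") || trimmed.startsWith("!")) {
                continue; // skip blank lines and comments
            }
            final int eq = trimmed.indexOf('=');
            if (eq <= 0) {
                continue; // not a key=value line
            }
            final String key = trimmed.substring(0, eq).trim();
            counts.merge(key, 1, Integer::sum);
        }
        final List<String> duplicates = new ArrayList<>();
        for (final Map.Entry<String, Integer> entry : counts.entrySet()) {
            if (entry.getValue() > 1) {
                duplicates.add(entry.getKey());
            }
        }
        return duplicates;
    }
}
```

Startup validation would then throw when the returned list is non-empty, including the list in the exception message.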
[jira] [Created] (NIFI-5377) StandardNiFiServiceFacade: Recursive method call allows for infinite loop when a circular reference exists
Jon Kessler created NIFI-5377: - Summary: StandardNiFiServiceFacade: Recursive method call allows for infinite loop when a circular reference exists Key: NIFI-5377 URL: https://issues.apache.org/jira/browse/NIFI-5377 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.7.0 Reporter: Jon Kessler At a minimum, when you attempt to view a list of controller services in the GUI, this specific method is called to obtain a set of referenced controller service identifiers. If there is a circular dependency in that set, you end up with an infinite loop that ultimately results in the user being redirected to an error page in the GUI. The method in question is findControllerServiceReferencingComponentIdentifiers. It checks whether each node has been visited already but does not add nodes to the visited set until after recursively calling itself again. If the line "visited.add(node);" is moved above the recursive call, this will be resolved. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
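The fix is the standard cycle-safe traversal pattern: record the node as visited before recursing, not after. A self-contained sketch (hypothetical names; not the actual StandardNiFiServiceFacade code):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Walk a reference graph, terminating on circular references.
public class ReferenceWalker {
    // graph maps a service id to the ids of components that reference it
    public static Set<String> collectReferences(final Map<String, List<String>> graph,
                                                final String node,
                                                final Set<String> visited) {
        if (visited.contains(node)) {
            return visited; // already handled; this breaks the cycle
        }
        visited.add(node); // add BEFORE the recursive call -- this is the fix
        for (final String referencing : graph.getOrDefault(node, List.of())) {
            collectReferences(graph, referencing, visited);
        }
        return visited;
    }
}
```

With the add performed after the recursive call instead, the a→b→a case below would recurse forever, since neither node would be in the visited set when re-encountered.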
[jira] [Created] (NIFI-4992) RedisStateManager/RedisStateProvider: Add get/set with key method
Jon Kessler created NIFI-4992: - Summary: RedisStateManager/RedisStateProvider: Add get/set with key method Key: NIFI-4992 URL: https://issues.apache.org/jira/browse/NIFI-4992 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Affects Versions: 1.5.0 Reporter: Jon Kessler So that you don't have to serialize/deserialize a map with each getState and setState, provide methods that accept a key as well. Make use of Redis's concept of a keyspace rather than storing the entire map with each operation. The existing getState and setState methods would still appear to work the same way they previously did, but would also need to be updated to be consistent with the new methods. This would also require updates to the StateManager and StateProvider interfaces, as well as all of their implementations, or perhaps an extension that the Redis* classes could then extend. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
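A rough sketch of the proposed shape, with the whole-map methods kept alongside the new keyed ones. The names and the in-memory backing map are illustrative only, not the actual NiFi StateProvider API; with Redis, the keyed methods would map to per-key GET/SET operations in a keyspace:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyedStateStore {
    private final Map<String, String> backing = new HashMap<>();

    // Existing style: the whole state map is read or replaced at once.
    public Map<String, String> getState() {
        return new HashMap<>(backing);
    }

    public void setState(final Map<String, String> state) {
        backing.clear();
        backing.putAll(state);
    }

    // Proposed style: touch a single key without round-tripping the whole map.
    public String getState(final String key) {
        return backing.get(key);
    }

    public void setState(final String key, final String value) {
        backing.put(key, value);
    }
}
```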
[jira] [Created] (NIFI-4829) MockControllerServiceInitializationContext: Update getControllerServiceName to function properly vs always returning null
Jon Kessler created NIFI-4829: - Summary: MockControllerServiceInitializationContext: Update getControllerServiceName to function properly vs always returning null Key: NIFI-4829 URL: https://issues.apache.org/jira/browse/NIFI-4829 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Affects Versions: 1.5.0 Reporter: Jon Kessler Currently this method is hardcoded to return null. When using the TestRunner to configure and enable controller services, those controller services will have access to a ControllerServiceLookup object that is aware of all controller services that have been added to that TestRunner. Rather than returning null, that method should use the lookup when it is available. [https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=blob;f=nifi-mock/src/main/java/org/apache/nifi/util/MockControllerServiceInitializationContext.java;hb=HEAD#l54] Something like: {code:java} @Override public String getControllerServiceName(final String serviceIdentifier) { return getControllerServiceLookup() != null ? getControllerServiceLookup().getControllerServiceName(serviceIdentifier) : null; }{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4813) ControllerServiceLookup: Add method signature
[ https://issues.apache.org/jira/browse/NIFI-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Kessler updated NIFI-4813: -- Description: The ControllerServiceLookup interface contains the following method signature:
{code:java}
Set<String> getControllerServiceIdentifiers(Class<? extends ControllerService> serviceType) throws IllegalArgumentException;{code}
Its implementation in the StandardControllerServiceProvider has more or less been deprecated, as it now just throws an UnsupportedOperationException. It has been replaced in that class by the following method, but that signature has not yet been added to the interface:
{code:java}
public Set<String> getControllerServiceIdentifiers(final Class<? extends ControllerService> serviceType, final String groupId){code}
This causes a problem for processors that used the former method via getControllerServiceLookup(), as that method only returns the interface. Therefore the new signature should be added to the interface.
[1][https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=blob;f=nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/service/StandardControllerServiceProvider.java;hb=HEAD#l662] [2][https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=blob;f=nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/service/StandardControllerServiceProvider.java;hb=HEAD#l798] [3][https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=blob;f=nifi-api/src/main/java/org/apache/nifi/controller/ControllerServiceLookup.java;hb=HEAD#l64]
> ControllerServiceLookup: Add method signature
> -
>
> Key: NIFI-4813
> URL: https://issues.apache.org/jira/browse/NIFI-4813
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.4.0
> Reporter: Jon Kessler
> Priority: Minor
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-4813) ControllerServiceLookup: Add method signature
Jon Kessler created NIFI-4813: - Summary: ControllerServiceLookup: Add method signature Key: NIFI-4813 URL: https://issues.apache.org/jira/browse/NIFI-4813 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Affects Versions: 1.4.0 Reporter: Jon Kessler The ControllerServiceLookup interface contains the following method signature:
{code:java}
Set<String> getControllerServiceIdentifiers(Class<? extends ControllerService> serviceType) throws IllegalArgumentException;{code}
Its implementation in the StandardControllerServiceProvider has more or less been deprecated, as it now just throws an UnsupportedOperationException. It has been replaced in that class by the following method, but that signature has not yet been added to the interface:
{code:java}
public Set<String> getControllerServiceIdentifiers(final Class<? extends ControllerService> serviceType, final String groupId){code}
This causes a problem for processors that used the former method via getControllerServiceLookup(), as that method only returns the interface. Therefore the new signature should be added to the interface. [1][https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=blob;f=nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/service/StandardControllerServiceProvider.java;hb=HEAD#l662] [2][https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=blob;f=nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/service/StandardControllerServiceProvider.java;hb=HEAD#l798] [3][https://git-wip-us.apache.org/repos/asf?p=nifi.git;a=blob;f=nifi-api/src/main/java/org/apache/nifi/controller/ControllerServiceLookup.java;hb=HEAD#l64] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-4505) MapCache/SimpleMapCache/PersistentMapCache: Add keySet method
Jon Kessler created NIFI-4505: - Summary: MapCache/SimpleMapCache/PersistentMapCache: Add keySet method Key: NIFI-4505 URL: https://issues.apache.org/jira/browse/NIFI-4505 Project: Apache NiFi Issue Type: Improvement Affects Versions: 1.4.0 Reporter: Jon Kessler Priority: Minor Suggest adding a keySet method to MapCache and its implementations, as well as to any client/interface that makes use of a MapCache. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (NIFI-4504) SimpleMapCache/PersistentMapCache: Add removeAndGet and removeByPatternAndGet
Jon Kessler created NIFI-4504: - Summary: SimpleMapCache/PersistentMapCache: Add removeAndGet and removeByPatternAndGet Key: NIFI-4504 URL: https://issues.apache.org/jira/browse/NIFI-4504 Project: Apache NiFi Issue Type: Improvement Affects Versions: 1.4.0 Reporter: Jon Kessler Priority: Minor Typical map implementations return the removed value when performing a remove. Because the existing remove methods can't be updated without a breaking change, I suggest adding new versions of the remove and removeByPattern methods that return the removed value(s). These changes should also be applied up the chain to any class that makes use of these classes, such as the MapCacheServer and AtomicDistributedMapCacheClient. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
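A minimal sketch of the proposed methods. The method names mirror the ticket; the implementation is purely illustrative, backed by a plain in-memory map rather than the NiFi cache server:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class SimpleCache {
    private final Map<String, String> cache = new LinkedHashMap<>();

    public void put(final String key, final String value) {
        cache.put(key, value);
    }

    // Existing style: removes the entry but discards the old value.
    public void remove(final String key) {
        cache.remove(key);
    }

    // Proposed: remove and return the value, like java.util.Map.remove.
    public String removeAndGet(final String key) {
        return cache.remove(key);
    }

    // Proposed: remove all keys matching the pattern, returning what was removed.
    public Map<String, String> removeByPatternAndGet(final String regex) {
        final Pattern pattern = Pattern.compile(regex);
        final Map<String, String> removed = new LinkedHashMap<>();
        cache.entrySet().removeIf(entry -> {
            if (pattern.matcher(entry.getKey()).matches()) {
                removed.put(entry.getKey(), entry.getValue());
                return true;
            }
            return false;
        });
        return removed;
    }
}
```

Adding these as new methods, rather than changing the return type of the existing `void` remove methods, keeps the change backward compatible, which is the constraint the ticket calls out.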