[jira] [Commented] (NIFI-4385) Adjust the QueryDatabaseTable processor for handling big tables.
[ https://issues.apache.org/jira/browse/NIFI-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17816151#comment-17816151 ] Peter Wicks commented on NIFI-4385: --- Hi [~readl1]. If your flow was previously expecting all rows in a single FlowFile, it will get multiple flow files after this change, based on batch size. So if you used to have a single 100k row FlowFile once an hour, now you would get 100 flow files per hour. Depending on the workflow, that could be a big deal. I think the important thing is that you can do what you want to do, it just requires a config change at the processor level based on your needs. > Adjust the QueryDatabaseTable processor for handling big tables. > > > Key: NIFI-4385 > URL: https://issues.apache.org/jira/browse/NIFI-4385 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.3.0 >Reporter: Tim Späth >Priority: Major > > When querying large database tables, the *QueryDatabaseTable* processor does > not perform very well. > The processor will always perform the full query and then transfer all > flowfiles as a list instead of > transferring them incrementally as the *ResultSet* fetches the next > rows (if a fetch size is given). > If you want to query a billion rows from a table, > the processor will add all flowfiles to an ArrayList in memory > before transferring the whole list after the last row is fetched by the > ResultSet. > I've checked the code in > *org.apache.nifi.processors.standard.QueryDatabaseTable.java* > and in my opinion, it would be no big deal to move the session.transfer to a > proper position in the code (into the while loop where the flowfile is added > to the list) to > achieve real _stream support_. There was also a bug report for this problem > which resulted in adding the new property *Maximum Number of Fragments*, > but this property will just limit the results. > Now you have to multiply *Max Rows Per Flow File* with *Maximum Number of > Fragments* to get your limit, > which is not really a solution for the original problem imho. > Also, the workaround with GenerateTableFetch and/or ExecuteSQL processors is > much slower than using a database cursor or a ResultSet > and streaming the rows into flowfiles directly in the queue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
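Since the batching discussion above is about where session.transfer happens, here is a minimal sketch of the streaming pattern being described, assuming a simplified fetch loop; the class, relationship, and batch-size parameter names are illustrative, not the processor's actual fields.

{code:java}
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;

class StreamingTransferSketch {
    // Transfers a FlowFile per batch inside the fetch loop instead of
    // accumulating every FlowFile in a list until the ResultSet is exhausted.
    static void streamRows(final ProcessSession session, final ResultSet rs,
                           final Relationship success, final int maxRowsPerFlowFile) throws SQLException {
        FlowFile flowFile = session.create();
        int rowsInBatch = 0;
        while (rs.next()) {
            // ... append the current row to flowFile's content (record writer omitted) ...
            rowsInBatch++;
            if (rowsInBatch >= maxRowsPerFlowFile) {
                session.transfer(flowFile, success); // hand the full batch downstream immediately
                flowFile = session.create();
                rowsInBatch = 0;
            }
        }
        if (rowsInBatch > 0) {
            session.transfer(flowFile, success);     // final partial batch
        } else {
            session.remove(flowFile);                // nothing left to send
        }
    }
}
{code}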
[jira] [Commented] (NIFI-8119) ExecuteSQL does not properly free database ressources
[ https://issues.apache.org/jira/browse/NIFI-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275764#comment-17275764 ] Peter Wicks commented on NIFI-8119: --- [~cgumpert] We migrated off of Teradata a few years back, but used NiFi with it for many years. During that time I setup a CI/CD environment using the Teradata Express VM image and had some success there with testing changes before releasing to PROD. Might help you to test your own changes/upgrades in advance. https://downloads.teradata.com/download/database/teradata-express-for-vmware-player > ExecuteSQL does not properly free database ressources > - > > Key: NIFI-8119 > URL: https://issues.apache.org/jira/browse/NIFI-8119 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.11.2 >Reporter: Christian Gumpert >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > We are using Nifi to ingest data from a Teradata database into our S3-based > data lake using a typical pattern of GenerateTableFetch and ExecuteSQL > processors. Our Teradata database tables contain columns of type CLOB (which > contains some JSON data). > We have installed the Teradata JDBC driver from the Teradata Tools and > Utilities package version 16.10.26.00 as described in this [Cloudera > community > article|https://community.cloudera.com/t5/Community-Articles/Using-Teradata-JDBC-connector-in-NiFi/ta-p/246783]. > After having configured a DBConnectionPool service with the Teradata > connection parameters we are able to execute our flow. The GenerateTableFetch > processors generates flowfiles containing SQL Queries which are then executed > by the ExecuteSQL processor. > After having processed the first 15 flowfiles the ExecuteSQL processor yields > the following error: > {noformat} > 2020-12-17T12:53:17+01:00 L921000109090A nifi-app.log: 2020-12-17 > 12:53:11,578 ERROR [Timer-Driven Process Thread-2] > o.a.nifi.processors.standard.ExecuteSQL > ExecuteSQL[id=afa23b0f-2e57-1fb6-d047-13646de03ebf] Unable to execute SQL > select query call devezv_replworkedec.get_edec_meldung(561, 562, 2, 0); for > StandardFlowFileRecord[uuid=ff7219a7-14e9-404e-a57a-28121653fed8,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1607610786368-4986, container=repo0, > section=890], offset=701888, > length=1077672],offset=32266,name=ff7219a7-14e9-404e-a57a-28121653fed8,size=58] > due to java.sql.SQLException: [Teradata Database] [TeraJDBC 16.10.00.07] > [Error 3130] [SQLState HY000] GET_EDEC_MELDUNG:Response limit exceeded.; > routing to failure: java.sql.SQLException: [Teradata Database] [TeraJDBC > 16.10.00.07] [Error 3130] [SQLState HY000] GET_EDEC_MELDUNG:Response limit > exceeded. > java.sql.SQLException: [Teradata Database] [TeraJDBC 16.10.00.07] [Error > 3130] [SQLState HY000] GET_EDEC_MELDUNG:Response limit exceeded. 
> at > com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(ErrorFactory.java:309) > at > com.teradata.jdbc.jdbc_4.statemachine.ReceiveInitSubState.action(ReceiveInitSubState.java:103) > at > com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.subStateMachine(StatementReceiveState.java:311) > at > com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.action(StatementReceiveState.java:200) > at > com.teradata.jdbc.jdbc_4.statemachine.StatementController.runBody(StatementController.java:137) > at > com.teradata.jdbc.jdbc_4.statemachine.PreparedStatementController.run(PreparedStatementController.java:46) > at > com.teradata.jdbc.jdbc_4.TDStatement.executeStatement(TDStatement.java:389) > at > com.teradata.jdbc.jdbc_4.TDStatement.executeStatement(TDStatement.java:331) > at > com.teradata.jdbc.jdbc_4.TDPreparedStatement.doPrepExecute(TDPreparedStatement.java:177) > at > com.teradata.jdbc.jdbc_4.TDPreparedStatement.execute(TDPreparedStatement.java:2778) > at > org.apache.commons.dbcp2.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:94) > at > org.apache.commons.dbcp2.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:94) > at > org.apache.nifi.processors.standard.AbstractExecuteSQL.onTrigger(AbstractExecuteSQL.java:266) > at > org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at > org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1176) > at > org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:213) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117) > at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) >
[jira] [Updated] (NIFI-7861) Arrow Flight Server Controller and Processors
[ https://issues.apache.org/jira/browse/NIFI-7861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-7861: -- Description: * Create an Arrow Flight controller service that hosts an Arrow Flight service in NiFi * Create a Processor to read data from an Arrow Flight Server * Create a Processor to write data to an Arrow Flight Server * Create a Processor to fetch data from an Arrow Flight Server ? Due to the in-memory nature of the Arrow Table format, exposing record reader/writer services for cross processor consumption does not make sense, since the Arrow Table format does not exist on disk. was: * Create an Arrow Flight controller service that hosts an Arrow Flight service in NiFi * Create a Processor to read data from an Arrow Flight Server * Create a Processor to write data to an Arrow Flight Server Due to the in-memory nature of the Arrow Table format, exposing record reader/writer services for cross processor consumption does not make sense, since the Arrow Table format does not exist on disk. > Arrow Flight Server Controller and Processors > - > > Key: NIFI-7861 > URL: https://issues.apache.org/jira/browse/NIFI-7861 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > * Create an Arrow Flight controller service that hosts an Arrow Flight > service in NiFi > * Create a Processor to read data from an Arrow Flight Server > * Create a Processor to write data to an Arrow Flight Server > * Create a Processor to fetch data from an Arrow Flight Server ? > > Due to the in-memory nature of the Arrow Table format, exposing record > reader/writer services for cross processor consumption does not make sense, > since the Arrow Table format does not exist on disk. -- This message was sent by Atlassian Jira (v8.3.4#803005)
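For context on what the proposed controller service would host, a hedged sketch using the Arrow Flight Java API; the port, allocator setup, and the no-op producer are placeholders, not part of the proposal.

{code:java}
import org.apache.arrow.flight.FlightServer;
import org.apache.arrow.flight.Location;
import org.apache.arrow.flight.NoOpFlightProducer;
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;

public class FlightServerSketch {
    public static void main(String[] args) throws Exception {
        try (BufferAllocator allocator = new RootAllocator()) {
            Location location = Location.forGrpcInsecure("0.0.0.0", 8815);
            // NoOpFlightProducer stands in for the logic a NiFi controller service would supply.
            try (FlightServer server = FlightServer.builder(allocator, location, new NoOpFlightProducer()).build()) {
                server.start();
                System.out.println("Flight server listening on " + server.getPort());
                server.awaitTermination();
            }
        }
    }
}
{code}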
[jira] [Created] (NIFI-7861) Arrow Flight Server Controller and Processors
Peter Wicks created NIFI-7861: - Summary: Arrow Flight Server Controller and Processors Key: NIFI-7861 URL: https://issues.apache.org/jira/browse/NIFI-7861 Project: Apache NiFi Issue Type: New Feature Components: Extensions Reporter: Peter Wicks Assignee: Peter Wicks * Create an Arrow Flight controller service that hosts an Arrow Flight service in NiFi * Create a Processor to read data from an Arrow Flight Server * Create a Processor to write data to an Arrow Flight Server Due to the in-memory nature of the Arrow Table format, exposing record reader/writer services for cross processor consumption does not make sense, since the Arrow Table format does not exist on disk. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-7805) ResultSetRecordSet Breaks if First Row Decimal Value is NULL
Peter Wicks created NIFI-7805: - Summary: ResultSetRecordSet Breaks if First Row Decimal Value is NULL Key: NIFI-7805 URL: https://issues.apache.org/jira/browse/NIFI-7805 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.12.0 Reporter: Peter Wicks Assignee: Peter Wicks Changes in NIFI-7369 caused a breaking change in how Decimal values are converted into schemas in ResultSetRecordSet. The code tries to read the scale and precision values, but reads them from the value in the first row returned, rather than from the ResultSetMetaData. If the value in the first row is NULL, then its scale and precision cannot be read and schema creation fails. -- This message was sent by Atlassian Jira (v8.3.4#803005)
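A small JDBC sketch of the distinction described above, assuming column 1 is a DECIMAL: precision and scale taken from ResultSetMetaData are available even when the first row's value is NULL, whereas reading them from the value itself fails.

{code:java}
import java.math.BigDecimal;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

class DecimalSchemaSketch {
    // Value-driven: throws a NullPointerException when the first row's decimal is NULL.
    static int[] fromFirstRowValue(ResultSet rs) throws SQLException {
        BigDecimal value = rs.getBigDecimal(1);
        return new int[] { value.precision(), value.scale() };
    }

    // Metadata-driven: works regardless of what the first row contains.
    static int[] fromMetaData(ResultSet rs) throws SQLException {
        ResultSetMetaData md = rs.getMetaData();
        return new int[] { md.getPrecision(1), md.getScale(1) };
    }
}
{code}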
[jira] [Commented] (NIFI-539) Add SCP Processor
[ https://issues.apache.org/jira/browse/NIFI-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17184658#comment-17184658 ] Peter Wicks commented on NIFI-539: -- I started working again on these processors a few months ago, but got frustrated again... [~joewitt] The way FTP/SFTP processors were built, with them bundled into the core processors NAR, makes it really hard to build an SSH/SCP set of processors in a separate NAR without a lot of code duplication, which I was trying to avoid. Which do you think would be better, to add the SSH/SCP processors to the core processors NAR, or add a lot of code duplication and have SSH/SCP live by themselves in a new NAR? > Add SCP Processor > - > > Key: NIFI-539 > URL: https://issues.apache.org/jira/browse/NIFI-539 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core Framework >Reporter: Edgardo Vega >Assignee: Peter Wicks >Priority: Major > Labels: beginner > Time Spent: 20m > Remaining Estimate: 0h > > A simple and powerful processor would be one that can perform scp file > transfers. SCP is generally much faster on file transfers especially on high > latency networks. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-3229) When a queue contains only Penalized FlowFile's the next processor Tasks/Time statistics becomes extremely large
[ https://issues.apache.org/jira/browse/NIFI-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17090172#comment-17090172 ] Peter Wicks commented on NIFI-3229: --- [~Kirhold] I've unassigned myself from this ticket. While I too would like to see a fix, after discussing this with one of the experts I don't know the correct path forward. Would love to have someone else pick up the case. > When a queue contains only Penalized FlowFile's the next processor Tasks/Time > statistics becomes extremely large > > > Key: NIFI-3229 > URL: https://issues.apache.org/jira/browse/NIFI-3229 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Dmitry Lukyanov >Priority: Minor > Attachments: flow.xml.gz, nifi-stats.png, nifi-stats2.png > > Time Spent: 20m > Remaining Estimate: 0h > > fetchfile on `not.found` produces penalized flow file > in this case i'm expecting the next processor will do one task execution when > flow file penalize time over. > but according to stats it executes approximately 1-6 times. > i understand that it could be a feature but stats became really unclear... > maybe there should be two columns? > `All Task/Times` and `Committed Task/Times` -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (NIFI-3229) When a queue contains only Penalized FlowFile's the next processor Tasks/Time statistics becomes extremely large
[ https://issues.apache.org/jira/browse/NIFI-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks reassigned NIFI-3229: - Assignee: (was: Peter Wicks) > When a queue contains only Penalized FlowFile's the next processor Tasks/Time > statistics becomes extremely large > > > Key: NIFI-3229 > URL: https://issues.apache.org/jira/browse/NIFI-3229 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Dmitry Lukyanov >Priority: Minor > Attachments: flow.xml.gz, nifi-stats.png, nifi-stats2.png > > Time Spent: 20m > Remaining Estimate: 0h > > fetchfile on `not.found` produces penalized flow file > in this case i'm expecting the next processor will do one task execution when > flow file penalize time over. > but according to stats it executes approximately 1-6 times. > i understand that it could be a feature but stats became really unclear... > maybe there should be two columns? > `All Task/Times` and `Committed Task/Times` -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (NIFI-6862) Add Large Batch support to PutSQL
Peter Wicks created NIFI-6862: - Summary: Add Large Batch support to PutSQL Key: NIFI-6862 URL: https://issues.apache.org/jira/browse/NIFI-6862 Project: Apache NiFi Issue Type: Task Components: Extensions Reporter: Peter Wicks Assignee: Peter Wicks If PutSQL executes an Insert Statement that selects from table `A` and inserts into table `B` the affected row count can exceed Int32.MaxValue. This causes the execution to fail because the affected row count can't be captured as an integer. The fix is to execute using `executeLargeUpdate`, which returns the affected row count as a long. Not all JDBC drivers support executing large batches, so we need to support current `executeBatch` mode as well, with users opting in to Large Batch mode. -- This message was sent by Atlassian Jira (v8.3.4#803005)
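A hedged sketch of the opt-in behavior described above, in plain JDBC; the boolean flag stands in for whatever property name the processor ends up exposing.

{code:java}
import java.sql.PreparedStatement;
import java.sql.SQLException;

class LargeUpdateSketch {
    // When the user opts in to Large Batch mode, use executeLargeUpdate(), which
    // returns a long, so affected row counts above Integer.MAX_VALUE don't fail.
    static long execute(PreparedStatement stmt, boolean largeBatchMode) throws SQLException {
        if (largeBatchMode) {
            return stmt.executeLargeUpdate();   // JDBC 4.2; not supported by every driver
        }
        return stmt.executeUpdate();            // classic int-returning path
    }
}
{code}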
[jira] [Resolved] (NIFI-6684) Add more property to Hive3ConnectionPool
[ https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-6684. --- Resolution: Fixed Fixed latent checkstyle violation > Add more property to Hive3ConnectionPool > > > Key: NIFI-6684 > URL: https://issues.apache.org/jira/browse/NIFI-6684 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: jamescheng >Assignee: jamescheng >Priority: Minor > Fix For: 1.10.0 > > Attachments: PutHive3 enhance.png > > Time Spent: 0.5h > Remaining Estimate: 0h > > The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them > are using DBCP BasicDataSource. However, Hive3ConnectionPool doesn't provide > some properties of what DBCPConnectionPool has. Such as "Minimum Idle > Connections", "Max Idle Connections", "Max Connection Lifetime", "Time > Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum > Evictable Idle Time". > This improvement is try to provide more properties for developer to set. -- This message was sent by Atlassian Jira (v8.3.4#803005)
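For reference, a sketch of how the properties listed above map onto the DBCP BasicDataSource that both connection pools wrap; the values shown are placeholders.

{code:java}
import org.apache.commons.dbcp2.BasicDataSource;

class HivePoolPropertiesSketch {
    static BasicDataSource configure() {
        BasicDataSource ds = new BasicDataSource();
        ds.setMinIdle(0);                               // Minimum Idle Connections
        ds.setMaxIdle(8);                               // Max Idle Connections
        ds.setMaxConnLifetimeMillis(3_600_000L);        // Max Connection Lifetime
        ds.setTimeBetweenEvictionRunsMillis(300_000L);  // Time Between Eviction Runs
        ds.setMinEvictableIdleTimeMillis(600_000L);     // Minimum Evictable Idle Time
        ds.setSoftMinEvictableIdleTimeMillis(60_000L);  // Soft Minimum Evictable Idle Time
        return ds;
    }
}
{code}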
[jira] [Commented] (NIFI-6684) Add more property to Hive3ConnectionPool
[ https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944556#comment-16944556 ] Peter Wicks commented on NIFI-6684: --- [~turcsanyip] Woops. I saw an earlier one when I was reviewing, but missed this one. I wonder why it didn't get picked up by Travis CI. I'll push the checkstyle fix in a minute, since 1.10 is so close. > Add more property to Hive3ConnectionPool > > > Key: NIFI-6684 > URL: https://issues.apache.org/jira/browse/NIFI-6684 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: jamescheng >Assignee: jamescheng >Priority: Minor > Fix For: 1.10.0 > > Attachments: PutHive3 enhance.png > > Time Spent: 0.5h > Remaining Estimate: 0h > > The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them > are using DBCP BasicDataSource. However, Hive3ConnectionPool doesn't provide > some properties of what DBCPConnectionPool has. Such as "Minimum Idle > Connections", "Max Idle Connections", "Max Connection Lifetime", "Time > Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum > Evictable Idle Time". > This improvement is try to provide more properties for developer to set. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (NIFI-5702) FlowFileRepo should not discard data (at least not by default)
[ https://issues.apache.org/jira/browse/NIFI-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-5702: -- Affects Version/s: 1.7.1 1.9.2 > FlowFileRepo should not discard data (at least not by default) > -- > > Key: NIFI-5702 > URL: https://issues.apache.org/jira/browse/NIFI-5702 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.7.1, 1.9.2 >Reporter: Brandon Rhys DeVries >Priority: Major > > The WriteAheadFlowFileRepository currently discards data it cannot find a > queue for. Unfortunately, we have run in to issues where, when rejoining a > node to a cluster, the flow.xml.gz can go "missing". This results in the > instance creating a new, empty, flow.xml.gz and then continuing on... and not > finding queues for any of its existing data, dropping it all. Regardless of > the circumstances leading to an empty (or unexpectedly modified) flow.xml.gz, > dropping data without user input seems less than ideal. > Internally, my group has added a property > "remove.orphaned.flowfiles.on.startup", defaulting to "false". On > startup, rather than silently dropping data, the repo will throw an exception > preventing startup. The operator can then choose to either "fix" any > unexpected issues with the flow.xml.gz, or they can set the above property to > "true" which restores the original behavior allowing the system to be > restarted. When set to "true" this property also results in a warning > message indicating that in this configuration the repo can drop data without > (advance) warning. > > > [1] > https://github.com/apache/nifi/blob/support/nifi-1.7.x/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/WriteAheadFlowFileRepository.java#L596 -- This message was sent by Atlassian Jira (v8.3.4#803005)
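A hypothetical sketch of the startup guard described in the ticket; the property name comes from the description, while the class and method names are purely illustrative.

{code:java}
// Hypothetical sketch of the proposed behavior: refuse to drop orphaned FlowFiles
// unless the operator has explicitly opted back in to the old behavior.
class OrphanGuardSketch {
    static final String REMOVE_ORPHANS_PROPERTY = "remove.orphaned.flowfiles.on.startup";

    static void handleOrphanedFlowFile(java.util.Properties nifiProperties, Object flowFileRecord) {
        boolean removeOrphans = Boolean.parseBoolean(
                nifiProperties.getProperty(REMOVE_ORPHANS_PROPERTY, "false"));
        if (!removeOrphans) {
            // Default: fail startup so the operator can repair flow.xml.gz first.
            throw new IllegalStateException("FlowFile has no matching queue; refusing to drop it. "
                    + "Set " + REMOVE_ORPHANS_PROPERTY + "=true to restore the drop-and-continue behavior.");
        }
        // Opt-in: original behavior, but warn loudly that data is being dropped.
        System.err.println("WARNING: dropping orphaned FlowFile " + flowFileRecord);
    }
}
{code}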
[jira] [Commented] (NIFI-5702) FlowFileRepo should not discard data (at least not by default)
[ https://issues.apache.org/jira/browse/NIFI-5702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943636#comment-16943636 ] Peter Wicks commented on NIFI-5702: --- We just had this issue happen in our cluster. When we discussed this same problem internally, we had come up with having a setting that would not let NiFi start without a flow.xml. Either the flow has to exist locally, or it must be provided by the cluster. Feels like a similar solution, though different in the details. > FlowFileRepo should not discard data (at least not by default) > -- > > Key: NIFI-5702 > URL: https://issues.apache.org/jira/browse/NIFI-5702 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Brandon Rhys DeVries >Priority: Major > > The WriteAheadFlowFileRepository currently discards data it cannot find a > queue for. Unfortunately, we have run in to issues where, when rejoining a > node to a cluster, the flow.xml.gz can go "missing". This results in the > instance creating a new, empty, flow.xml.gz and then continuing on... and not > finding queues for any of its existing data, dropping it all. Regardless of > the circumstances leading to an empty (or unexpectedly modified) flow.xml.gz, > dropping data without user input seems less than ideal. > Internally, my group has added a property > "remove.orphaned.flowfiles.on.startup", defaulting to "false". On > startup, rather than silently dropping data, the repo will throw an exception > preventing startup. The operator can then choose to either "fix" any > unexpected issues with the flow.xml.gz, or they can set the above property to > "true" which restores the original behavior allowing the system to be > restarted. When set to "true" this property also results in a warning > message indicating that in this configuration the repo can drop data without > (advance) warning. > > > [1] > https://github.com/apache/nifi/blob/support/nifi-1.7.x/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/repository/WriteAheadFlowFileRepository.java#L596 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (NIFI-6684) Add more property to Hive3ConnectionPool
[ https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-6684. --- Fix Version/s: 1.10.0 Resolution: Fixed Signed-off and merged. > Add more property to Hive3ConnectionPool > > > Key: NIFI-6684 > URL: https://issues.apache.org/jira/browse/NIFI-6684 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: jamescheng >Assignee: jamescheng >Priority: Minor > Fix For: 1.10.0 > > Attachments: PutHive3 enhance.png > > Time Spent: 0.5h > Remaining Estimate: 0h > > The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them > are using DBCP BasicDataSource. However, Hive3ConnectionPool doesn't provide > some properties of what DBCPConnectionPool has. Such as "Minimum Idle > Connections", "Max Idle Connections", "Max Connection Lifetime", "Time > Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum > Evictable Idle Time". > This improvement is try to provide more properties for developer to set. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (NIFI-6684) Add more property to Hive3ConnectionPool
[ https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks reassigned NIFI-6684: - Assignee: jamescheng (was: Peter Wicks) > Add more property to Hive3ConnectionPool > > > Key: NIFI-6684 > URL: https://issues.apache.org/jira/browse/NIFI-6684 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: jamescheng >Assignee: jamescheng >Priority: Minor > Attachments: PutHive3 enhance.png > > Time Spent: 10m > Remaining Estimate: 0h > > The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them > are using DBCP BasicDataSource. However, Hive3ConnectionPool doesn't provide > some properties of what DBCPConnectionPool has. Such as "Minimum Idle > Connections", "Max Idle Connections", "Max Connection Lifetime", "Time > Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum > Evictable Idle Time". > This improvement is try to provide more properties for developer to set. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-6684) Add more property to Hive3ConnectionPool
[ https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936042#comment-16936042 ] Peter Wicks commented on NIFI-6684: --- [~AxelSync] I was planning to review this, and don't have any concerns. It probably should have been done quite a while ago. > Add more property to Hive3ConnectionPool > > > Key: NIFI-6684 > URL: https://issues.apache.org/jira/browse/NIFI-6684 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: jamescheng >Assignee: Peter Wicks >Priority: Minor > Attachments: PutHive3 enhance.png > > Time Spent: 10m > Remaining Estimate: 0h > > The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them > are using DBCP BasicDataSource. However, Hive3ConnectionPool doesn't provide > some properties of what DBCPConnectionPool has. Such as "Minimum Idle > Connections", "Max Idle Connections", "Max Connection Lifetime", "Time > Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum > Evictable Idle Time". > This improvement is try to provide more properties for developer to set. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-6684) Add more property to Hive3ConnectionPool
[ https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935856#comment-16935856 ] Peter Wicks commented on NIFI-6684: --- Hi [~AxelSync], I have been discussing this issue outside of Jira with [~jamescheng]. We've had some issues getting the proper permissions in place to get it assigned. I've assigned it to myself in the short term until we figure out the permissions issue. I'll see if he can post an update, but I believe he has completed the work. > Add more property to Hive3ConnectionPool > > > Key: NIFI-6684 > URL: https://issues.apache.org/jira/browse/NIFI-6684 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: jamescheng >Assignee: Peter Wicks >Priority: Minor > Attachments: PutHive3 enhance.png > > Time Spent: 10m > Remaining Estimate: 0h > > The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them > are using DBCP BasicDataSource. However, Hive3ConnectionPool doesn't provide > some properties of what DBCPConnectionPool has. Such as "Minimum Idle > Connections", "Max Idle Connections", "Max Connection Lifetime", "Time > Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum > Evictable Idle Time". > This improvement is try to provide more properties for developer to set. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (NIFI-6684) Add more property to Hive3ConnectionPool
[ https://issues.apache.org/jira/browse/NIFI-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks reassigned NIFI-6684: - Assignee: Peter Wicks > Add more property to Hive3ConnectionPool > > > Key: NIFI-6684 > URL: https://issues.apache.org/jira/browse/NIFI-6684 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: jamescheng >Assignee: Peter Wicks >Priority: Minor > Attachments: PutHive3 enhance.png > > Time Spent: 10m > Remaining Estimate: 0h > > The Hive3ConnectionPool is similar with DBCPConnectionPool as both of them > are using DBCP BasicDataSource. However, Hive3ConnectionPool doesn't provide > some properties of what DBCPConnectionPool has. Such as "Minimum Idle > Connections", "Max Idle Connections", "Max Connection Lifetime", "Time > Between Eviction Runs", "Minimum Evictable Idle Time" and "Soft Minimum > Evictable Idle Time". > This improvement is try to provide more properties for developer to set. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (NIFI-6567) HandleHttpRequest does not shutdown HTTP server in some circumstances
[ https://issues.apache.org/jira/browse/NIFI-6567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-6567. --- Resolution: Fixed PR Merged and Closed. > HandleHttpRequest does not shutdown HTTP server in some circumstances > - > > Key: NIFI-6567 > URL: https://issues.apache.org/jira/browse/NIFI-6567 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > Time Spent: 1h 10m > Remaining Estimate: 0h > > (Dependent on NIFI-6562) > If multiple cluster nodes are running on a single host, then there will be > port conflicts in HandleHttpRequest. To avoid this, users may choose to run > in Primary Only scheduling mode. > If there is a primary node change, the processor does not properly shut down > the old HTTP server instance before the new primary node tries to start up > its own local copy. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (NIFI-6600) Add Penalize Documentation Annotation
Peter Wicks created NIFI-6600: - Summary: Add Penalize Documentation Annotation Key: NIFI-6600 URL: https://issues.apache.org/jira/browse/NIFI-6600 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Peter Wicks Assignee: Peter Wicks Add an annotation similar to `WritesAttributes` or `CapabilityDescription` that can be used to add documentation related to when a FlowFile will be penalized. Could be expanded/addition to include functions outlined in NIFI-5670, to auto penalize based on relationship config. -- This message was sent by Atlassian Jira (v8.3.2#803003)
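A hypothetical sketch of what such an annotation could look like, modeled on how existing documentation annotations are declared; the annotation name and attribute are illustrative only.

{code:java}
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical shape for the proposed annotation.
// Usage on a processor class might look like: @PenalizesFlowFile(when = "The remote file is not yet available")
@Documented
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface PenalizesFlowFile {
    /** Human-readable explanation of when this processor penalizes a FlowFile. */
    String when();
}
{code}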
[jira] [Created] (NIFI-6593) Support Custom MockProcessContext's in TestRunner
Peter Wicks created NIFI-6593: - Summary: Support Custom MockProcessContext's in TestRunner Key: NIFI-6593 URL: https://issues.apache.org/jira/browse/NIFI-6593 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Peter Wicks When building a test runner, it may be helpful to provide a custom version of the `MockProcessContext`. Propose exposing this as an option when building a new TestRunner. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (NIFI-6567) HandleHttpRequest does not shutdown HTTP server in some circumstances
Peter Wicks created NIFI-6567: - Summary: HandleHttpRequest does not shutdown HTTP server in some circumstances Key: NIFI-6567 URL: https://issues.apache.org/jira/browse/NIFI-6567 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Peter Wicks Assignee: Peter Wicks (Dependent on NIFI-6562) If multiple cluster nodes are running on a single host, then there will be port conflicts in HandleHttpRequest. To avoid this, users may choose to run in Primary Only scheduling mode. If there is a primary node change, the processor does not properly shut down the old HTTP server instance before the new primary node tries to start up its own local copy. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (NIFI-6562) Expose ExecutionNode in ProcessContext
Peter Wicks created NIFI-6562: - Summary: Expose ExecutionNode in ProcessContext Key: NIFI-6562 URL: https://issues.apache.org/jira/browse/NIFI-6562 Project: Apache NiFi Issue Type: Improvement Reporter: Peter Wicks Assignee: Peter Wicks In some circumstances it can be helpful to know if a processor is scheduled to run on all nodes or only the primary node, and take appropriate action when the primary node changes based on this context. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
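A short sketch of how a processor could consume the exposed value, assuming the getter added by this ticket on ProcessContext and the existing ExecutionNode enum.

{code:java}
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.scheduling.ExecutionNode;

class ExecutionNodeSketch {
    // Returns true when the processor is scheduled on the primary node only,
    // so it can clean up shared resources when the primary node changes.
    static boolean isPrimaryNodeOnly(ProcessContext context) {
        return context.getExecutionNode() == ExecutionNode.PRIMARY;
    }
}
{code}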
[jira] [Commented] (NIFI-6559) FlowFile Repo Journal Recovery Should not Fail if External Overflow Files are Missing
[ https://issues.apache.org/jira/browse/NIFI-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909243#comment-16909243 ] Peter Wicks commented on NIFI-6559: --- If I wrote an offline FlowFile Repository compaction utility, and put it in the NiFi utilities, do you think this would be an appropriate option that I could add to it there? > FlowFile Repo Journal Recovery Should not Fail if External Overflow Files are > Missing > - > > Key: NIFI-6559 > URL: https://issues.apache.org/jira/browse/NIFI-6559 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > When NiFi is journaling the FlowFile repository changes to disk it sometimes > writes Overflow files if it exceeds a certain memory threshold. > These files are tracked inside of the *.journal files as External File > References. If one of these external file references is deleted or lost the > entire journal fails to recover. > Instead, I feel this should work more like FlowFile's that lose their queue, > or Content in the Content Repository that has lost it's FlowFile. Log it, > and move on. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Comment Edited] (NIFI-6559) FlowFile Repo Journal Recovery Should not Fail if External Overflow Files are Missing
[ https://issues.apache.org/jira/browse/NIFI-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909243#comment-16909243 ] Peter Wicks edited comment on NIFI-6559 at 8/16/19 5:25 PM: If I wrote an offline FlowFile Repository checkpoint utility, and put it in the NiFi utilities, do you think this would be an appropriate option that I could add to it there? was (Author: patricker): If I wrote an offline FlowFile Repository compaction utility, and put it in the NiFi utilities, do you think this would be an appropriate option that I could add to it there? > FlowFile Repo Journal Recovery Should not Fail if External Overflow Files are > Missing > - > > Key: NIFI-6559 > URL: https://issues.apache.org/jira/browse/NIFI-6559 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > When NiFi is journaling the FlowFile repository changes to disk it sometimes > writes Overflow files if it exceeds a certain memory threshold. > These files are tracked inside of the *.journal files as External File > References. If one of these external file references is deleted or lost the > entire journal fails to recover. > Instead, I feel this should work more like FlowFile's that lose their queue, > or Content in the Content Repository that has lost it's FlowFile. Log it, > and move on. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (NIFI-6559) FlowFile Repo Journal Recovery Should not Fail if External Overflow Files are Missing
[ https://issues.apache.org/jira/browse/NIFI-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909158#comment-16909158 ] Peter Wicks commented on NIFI-6559: --- [~markap14] I don't disagree with any of the downsides/issues you outlined. Do you have an idea for an option to work with this though? Because right now the only option for fixing a corrupt journal is dropping the entire journal with all changes. > FlowFile Repo Journal Recovery Should not Fail if External Overflow Files are > Missing > - > > Key: NIFI-6559 > URL: https://issues.apache.org/jira/browse/NIFI-6559 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > When NiFi is journaling the FlowFile repository changes to disk it sometimes > writes Overflow files if it exceeds a certain memory threshold. > These files are tracked inside of the *.journal files as External File > References. If one of these external file references is deleted or lost the > entire journal fails to recover. > Instead, I feel this should work more like FlowFile's that lose their queue, > or Content in the Content Repository that has lost it's FlowFile. Log it, > and move on. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFI-6559) FlowFile Repo Journal Recovery Should not Fail if External Overflow Files are Missing
Peter Wicks created NIFI-6559: - Summary: FlowFile Repo Journal Recovery Should not Fail if External Overflow Files are Missing Key: NIFI-6559 URL: https://issues.apache.org/jira/browse/NIFI-6559 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Peter Wicks Assignee: Peter Wicks When NiFi is journaling the FlowFile repository changes to disk it sometimes writes Overflow files if it exceeds a certain memory threshold. These files are tracked inside of the *.journal files as External File References. If one of these external file references is deleted or lost the entire journal fails to recover. Instead, I feel this should work more like FlowFile's that lose their queue, or Content in the Content Repository that has lost it's FlowFile. Log it, and move on. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFI-6512) Relationship Name `ellipsis` Not Working
Peter Wicks created NIFI-6512: - Summary: Relationship Name `ellipsis` Not Working Key: NIFI-6512 URL: https://issues.apache.org/jira/browse/NIFI-6512 Project: Apache NiFi Issue Type: Bug Components: Core UI Affects Versions: 1.9.2 Reporter: Peter Wicks Assignee: Peter Wicks The relationship name calls the `ellipsis` jQuery plugin, but does not work because of a CSS style being applied to the list. This causes long relationship names to be cut off abruptly instead of having the `ellipsis` applied. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (NIFI-6504) Add Color to Connections
[ https://issues.apache.org/jira/browse/NIFI-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896326#comment-16896326 ] Peter Wicks commented on NIFI-6504: --- [~alopresto] I put some thought into this. My first thought was a pretty strong no. But then I thought, what if we added a default color at the relationship level in the code, or we could tag relationships with an enum like "success/failure/unmodified/default" and then put the translation into the js code. Then, in the connection creation screen the color would show up with a checkbox next to it, so you can go with the default defined for that relationship type (red/green/blue). It could be unchecked by default (if multiple relationships are selected we default to black?). This provides two things: - Existing connections remain black, keeping good backwards compat, and happy users - Users who want black don't have to do anything different than they do today. I suppose you could go the extra mile and add a nifi property to control this checkbox's default state. > Add Color to Connections > > > Key: NIFI-6504 > URL: https://issues.apache.org/jira/browse/NIFI-6504 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, Core UI, Flow Versioning >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > With so many connections overlapping on screen, it can be hard to trace where > different connections are going, and what they are used for. > Adding color to connections would make this much easier. It might be good to > add color to other components as well, such as the Funnel to help identify > their purpose using a color cue. > Example: > [https://photos.app.goo.gl/6NqZmM3PnCZRai6f9] > > POC Branch: [https://github.com/patricker/nifi/tree/NIFI-6504] -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6504) Add Color to Connections
[ https://issues.apache.org/jira/browse/NIFI-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6504: -- Description: With so many connections overlapping on screen, it can be hard to trace where different connections are going, and what they are used for. Adding color to connections would make this much easier. It might be good to add color to other components as well, such as the Funnel to help identify their purpose using a color cue. Example: [https://photos.app.goo.gl/6NqZmM3PnCZRai6f9] POC Branch: [https://github.com/patricker/nifi/tree/NIFI-6504] was: With so many connections overlapping on screen, it can be hard to trace where different connections are going, and what they are used for. Adding color to connections would make this much easier. It might be good to add color to other components as well, such as the Funnel to help identify their purpose using a color cue. Example: https://photos.app.goo.gl/6NqZmM3PnCZRai6f9 > Add Color to Connections > > > Key: NIFI-6504 > URL: https://issues.apache.org/jira/browse/NIFI-6504 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, Core UI, Flow Versioning >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > With so many connections overlapping on screen, it can be hard to trace where > different connections are going, and what they are used for. > Adding color to connections would make this much easier. It might be good to > add color to other components as well, such as the Funnel to help identify > their purpose using a color cue. > Example: > [https://photos.app.goo.gl/6NqZmM3PnCZRai6f9] > > POC Branch: [https://github.com/patricker/nifi/tree/NIFI-6504] -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFI-6504) Add Color to Connections
Peter Wicks created NIFI-6504: - Summary: Add Color to Connections Key: NIFI-6504 URL: https://issues.apache.org/jira/browse/NIFI-6504 Project: Apache NiFi Issue Type: Improvement Components: Core Framework, Core UI, Flow Versioning Reporter: Peter Wicks Assignee: Peter Wicks With so many connections overlapping on screen, it can be hard to trace where different connections are going, and what they are used for. Adding color to connections would make this much easier. It might be good to add color to other components as well, such as the Funnel to help identify their purpose using a color cue. Example: https://photos.app.goo.gl/6NqZmM3PnCZRai6f9 -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFI-6492) Paste Location should match Right-Click location
Peter Wicks created NIFI-6492: - Summary: Paste Location should match Right-Click location Key: NIFI-6492 URL: https://issues.apache.org/jira/browse/NIFI-6492 Project: Apache NiFi Issue Type: Improvement Components: Core UI Reporter: Peter Wicks Assignee: Peter Wicks When using the mouse to copy/paste in NiFi the paste location is wherever the context menu-item "Paste" is located on screen, and not where the user right-clicked on the canvas. Change this to be where the user right-clicked instead. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (NIFI-6455) Can't see config properties that overflow a scrollable list
[ https://issues.apache.org/jira/browse/NIFI-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-6455. --- Resolution: Fixed Merged PR. > Can't see config properties that overflow a scrollable list > --- > > Key: NIFI-6455 > URL: https://issues.apache.org/jira/browse/NIFI-6455 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.10.0 >Reporter: Peter Wicks >Assignee: Robert Fellows >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > When editing properties in a scrollable property configuration window, such > as in the Processor Configure window or the Controller Service Configure > window. > If the number of properties is too large to fit in the scroll window, then > one (or more?) of the properties will be inaccessible at the bottom. The > weird thing is the scrollbar is the right size, you just can't scroll down > that far. > I used GetSolr as my test case, as it has so many properties. > I tested in 1.9.2 and was not able to reproduce the issue, but in 1.10 it > shows up. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Closed] (NIFI-6473) Check Http Context Status Processor
[ https://issues.apache.org/jira/browse/NIFI-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks closed NIFI-6473. - > Check Http Context Status Processor > --- > > Key: NIFI-6473 > URL: https://issues.apache.org/jira/browse/NIFI-6473 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > In between a HandleHttpRequest and HandleHttpResponse a lot can happen. If > the flow is long running it would be nice to know if the HTTP Context is > still valid, or if the user terminated the connection. This is especially > useful before starting a long running process, or if a FlowFile has been > queued for a long period of time. > Suggest creating a "CheckHttpContext" Processor that will route FlowFile's to > relationships such as "Valid" if the connection is still good, or > "invalid"/"expired". -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (NIFI-6473) Check Http Context Status Processor
[ https://issues.apache.org/jira/browse/NIFI-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-6473. --- Resolution: Won't Do > Check Http Context Status Processor > --- > > Key: NIFI-6473 > URL: https://issues.apache.org/jira/browse/NIFI-6473 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > In between a HandleHttpRequest and HandleHttpResponse a lot can happen. If > the flow is long running it would be nice to know if the HTTP Context is > still valid, or if the user terminated the connection. This is especially > useful before starting a long running process, or if a FlowFile has been > queued for a long period of time. > Suggest creating a "CheckHttpContext" Processor that will route FlowFile's to > relationships such as "Valid" if the connection is still good, or > "invalid"/"expired". -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (NIFI-6473) Check Http Context Status Processor
[ https://issues.apache.org/jira/browse/NIFI-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16893061#comment-16893061 ] Peter Wicks commented on NIFI-6473: --- I've decided that this is actually not possible, due to the way HTTP works: there are no heartbeats, and per the HTTP spec the first line of an HTTP response MUST be the status code. Calling flushBuffer without writing content to an open connection still sends the status code and headers, locking them in and not allowing modification. Closing as infeasible due to protocol constraints. > Check Http Context Status Processor > --- > > Key: NIFI-6473 > URL: https://issues.apache.org/jira/browse/NIFI-6473 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > In between a HandleHttpRequest and HandleHttpResponse a lot can happen. If > the flow is long running it would be nice to know if the HTTP Context is > still valid, or if the user terminated the connection. This is especially > useful before starting a long running process, or if a FlowFile has been > queued for a long period of time. > Suggest creating a "CheckHttpContext" Processor that will route FlowFile's to > relationships such as "Valid" if the connection is still good, or > "invalid"/"expired". -- This message was sent by Atlassian JIRA (v7.6.14#76016)
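To make the protocol constraint concrete, a small servlet-API sketch (hypothetical handler code, not NiFi internals): once the response buffer is flushed, the status line and headers are committed and cannot be changed afterwards, so there is no content-free way to probe the client.

{code:java}
import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

class HttpProbeSketch {
    // Attempting to "check" the connection by flushing commits the response:
    // the status line and headers go on the wire and isCommitted() becomes true,
    // so a later HandleHttpResponse can no longer set a different status code.
    static void probe(HttpServletResponse response) throws IOException {
        response.flushBuffer();                 // sends the default "200 OK" status line and headers immediately
        assert response.isCommitted();          // status/headers are now locked in
        // response.setStatus(500);             // would be ignored or rejected at this point
    }
}
{code}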
[jira] [Updated] (NIFI-6473) Check Http Context Status Processor
[ https://issues.apache.org/jira/browse/NIFI-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6473: -- Affects Version/s: 1.9.2 > Check Http Context Status Processor > --- > > Key: NIFI-6473 > URL: https://issues.apache.org/jira/browse/NIFI-6473 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > In between a HandleHttpRequest and HandleHttpResponse a lot can happen. If > the flow is long running it would be nice to know if the HTTP Context is > still valid, or if the user terminated the connection. This is especially > useful before starting a long running process, or if a FlowFile has been > queued for a long period of time. > Suggest creating a "CheckHttpContext" Processor that will route FlowFile's to > relationships such as "Valid" if the connection is still good, or > "invalid"/"expired". -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFI-6473) Check Http Context Status Processor
Peter Wicks created NIFI-6473: - Summary: Check Http Context Status Processor Key: NIFI-6473 URL: https://issues.apache.org/jira/browse/NIFI-6473 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Peter Wicks Assignee: Peter Wicks In between a HandleHttpRequest and HandleHttpResponse a lot can happen. If the flow is long running it would be nice to know if the HTTP Context is still valid, or if the user terminated the connection. This is especially useful before starting a long running process, or if a FlowFile has been queued for a long period of time. Suggest creating a "CheckHttpContext" Processor that will route FlowFile's to relationships such as "Valid" if the connection is still good, or "invalid"/"expired". -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (NIFI-6175) Spark Livy - Improving Livy
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890291#comment-16890291 ] Peter Wicks commented on NIFI-6175: --- There does not appear to be a lot of appetite for testing/merging Livy changes. I've been using Livy extensively on a project, and this PR has blossomed into a very large number of changes as time has gone on. Amazingly, I don't think there are any breaking changes, but to be honest, the built-in Livy support is so basic, it's hard to imagine people using it very usefully. I will keep updating this until my project has stabilized, and then hopefully we can find someone to merge it :). > Spark Livy - Improving Livy > --- > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > * Would like to see all available options there, with a blanket "conf" > option for users to provide custom configuration. > * When the controller service shuts down, sessions are left running, with no > option to shut them down. Add in functionality to shutdown open sessions. > * If the controller service finds no Idle Livy Sessions, it will create a > new session... until the queue runs out of resources :). Need to have a > Min/Max/Should be elastic or strict option > * When Livy starts up, it searches for existing sessions, but does not > verify that those sessions belong to it. > ** The Kerberos identity should be used to verify the identity on the > session matches the identity on the controller service. > ** Also, if a Proxy user has been specified, that should also be verified. > If no proxy user was specified, then the Proxy user on the Livy session > should match the Kerberos identity. > * The initialization of the SSL Context is not implemented in a thread safe > way. This leads to exceptions when multiple threads are running against the > same Controller Service. > ** SSL Context init should be made thread safe. > * There is a bug in Livy that causes running sessions to be killed if they > run longer than the timeout value: > https://issues.apache.org/jira/browse/LIVY-547. > ** The processor should support the work around described in the discussion, > by pinging the session to record activity on sessions to keep them alive. > [https://github.com/apache/incubator-livy/pull/138#issuecomment-455352091] > Livy should also support Batch mode. > * Include a controller service to re-use configs, but controller service is > basically just a config holder > * Processor named `ExecuteSparkBatch`. This is harder than Session because > Batch mode only supports code submission through a file path. So users will > need to upload to HDFS first. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
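A hedged sketch of the LIVY-547 keep-alive workaround referenced above, assuming (per the linked discussion) that touching the standard GET /sessions/{id} endpoint registers activity on the session; the base URL and return handling are placeholders.

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

class LivyKeepAliveSketch {
    // Periodically issuing a GET against the session endpoint counts as activity,
    // which keeps Livy's idle timeout from killing a long-running session.
    static int pingSession(String livyBaseUrl, int sessionId) throws IOException {
        URL url = new URL(livyBaseUrl + "/sessions/" + sessionId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        try {
            return conn.getResponseCode();   // 200 means the session still exists
        } finally {
            conn.disconnect();
        }
    }
}
{code}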
[jira] [Updated] (NIFI-6175) Spark Livy - Improving Livy
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6175: -- Description: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) * Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. * When the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. * If the controller service finds no Idle Livy Sessions, it will create a new session... until the queue runs out of resources :). Need to have a Min/Max/Should be elastic or strict option * When Livy starts up, it searches for existing sessions, but does not verify that those sessions belong to it. ** The Kerberos identity should be used to verify the identity on the session matches the identity on the controller service. ** Also, if a Proxy user has been specified, that should also be verified. If no proxy user was specified, then the Proxy user on the Livy session should match the Kerberos identity. * The initialization of the SSL Context is not implemented in a thread safe way. This leads to exceptions when multiple threads are running against the same Controller Service. ** SSL Context init should be made thread safe. * There is a bug in Livy that causes running sessions to be killed if they run longer than the timeout value: https://issues.apache.org/jira/browse/LIVY-547. ** The processor should support the work around described in the discussion, by pinging the session to record activity on sessions to keep them alive. [https://github.com/apache/incubator-livy/pull/138#issuecomment-455352091] Livy should also support Batch mode. * Include a controller service to re-use configs, but controller service is basically just a config holder * Processor named `ExecuteSparkBatch`. This is harder than Session because Batch mode only supports code submission through a file path. So users will need to upload to HDFS first. was: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) * Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. * When the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. * If the controller service finds no Idle Livy Sessions, it will create a new session... until the queue runs out of resources :). Need to have a Min/Max/Should be elastic or strict option * When Livy starts up, it searches for existing sessions, but does not verify that those sessions belong to it. ** The Kerberos identity should be used to verify the identity on the session matches the identity on the controller service. ** Also, if a Proxy user has been specified, that should also be verified. If no proxy user was specified, then the Proxy user on the Livy session should match the Kerberos identity. * The initialization of the SSL Context is not implemented in a thread safe way. This leads to exceptions when multiple threads are running against the same Controller Service. ** SSL Context init should be made thread safe. Livy should also support Batch mode. 
* Include a controller service to re-use configs, but controller service is basically just a config holder * Processor named `ExecuteSparkBatch`. This is harder than Session because Batch mode only supports code submission through a file path. So users will need to upload to HDFS first. > Spark Livy - Improving Livy > --- > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > * Would like to see all available options there, with a blanket "conf" > option for users to provide custom configuration. > * When the controller service shuts down, sessions are left running, with no > option to shut them down. Add in functionality to shutdown open sessions. > * If the controller service finds no Idle Livy Sessions, it will create a > new session... until the queue runs out of resources :). Need to have a > Min/Max/Should be elastic or strict option > * When Livy starts up, it searches for existing
[jira] [Updated] (NIFI-6175) Spark Livy - Improving Livy
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6175: -- Summary: Spark Livy - Improving Livy (was: Spark Livy - Add Support for All Missing Session Features) > Spark Livy - Improving Livy > --- > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > * Would like to see all available options there, with a blanket "conf" > option for users to provide custom configuration. > * When the controller service shuts down, sessions are left running, with no > option to shut them down. Add in functionality to shutdown open sessions. > * If the controller service finds no Idle Livy Sessions, it will create a > new session... until the queue runs out of resources :). Need to have a > Min/Max/Should be elastic or strict option > * When Livy starts up, it searches for existing sessions, but does not > verify that those sessions belong to it. > ** The Kerberos identity should be used to verify the identity on the > session matches the identity on the controller service. > ** Also, if a Proxy user has been specified, that should also be verified. > If no proxy user was specified, then the Proxy user on the Livy session > should match the Kerberos identity. > * The initialization of the SSL Context is not implemented in a thread safe > way. This leads to exceptions when multiple threads are running against the > same Controller Service. > ** SSL Context init should be made thread safe. > > Livy should also support Batch mode. > * Include a controller service to re-use configs, but controller service is > basically just a config holder > * Processor named `ExecuteSparkBatch`. This is harder than Session because > Batch mode only supports code submission through a file path. So users will > need to upload to HDFS first. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6175) Spark Livy - Add Support for All Missing Session Features
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6175: -- Description: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) * Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. * When the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. * If the controller service finds no Idle Livy Sessions, it will create a new session... until the queue runs out of resources :). Need to have a Min/Max/Should be elastic or strict option * When Livy starts up, it searches for existing sessions, but does not verify that those sessions belong to it. ** The Kerberos identity should be used to verify the identity on the session matches the identity on the controller service. ** Also, if a Proxy user has been specified, that should also be verified. If no proxy user was specified, then the Proxy user on the Livy session should match the Kerberos identity. * The initialization of the SSL Context is not implemented in a thread safe way. This leads to exceptions when multiple threads are running against the same Controller Service. ** SSL Context init should be made thread safe. Livy should also support Batch mode. * Include a controller service to re-use configs, but controller service is basically just a config holder * Processor named `ExecuteSparkBatch`. This is harder than Session because Batch mode only supports code submission through a file path. So users will need to upload to HDFS first. was: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) * Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. * When the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. * If the controller service finds no Idle Livy Sessions, it will create a new session... until the queue runs out of resources :). Need to have a Min/Max/Should be elastic or strict option * When Livy starts up, it searches for existing sessions, but does not verify that those sessions belong to it. ** The Kerberos identity should be used to verify the identity on the session matches the identity on the controller service. ** Also, if a Proxy user has been specified, that should also be verified. If no proxy user was specified, then the Proxy user on the Livy session should match the Kerberos identity. * The initialization of the SSL Context is not implemented in a thread safe way. This leads to exceptions when multiple threads are running against the same Controller Service. ** SSL Context init should be made thread safe. > Spark Livy - Add Support for All Missing Session Features > - > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) 
> * Would like to see all available options there, with a blanket "conf" > option for users to provide custom configuration. > * When the controller service shuts down, sessions are left running, with no > option to shut them down. Add in functionality to shutdown open sessions. > * If the controller service finds no Idle Livy Sessions, it will create a > new session... until the queue runs out of resources :). Need to have a > Min/Max/Should be elastic or strict option > * When Livy starts up, it searches for existing sessions, but does not > verify that those sessions belong to it. > ** The Kerberos identity should be used to verify the identity on the > session matches the identity on the controller service. > ** Also, if a Proxy user has been specified, that should also be verified. > If no proxy user was specified, then the Proxy user on the Livy session > should match the Kerberos identity. > * The initialization of the SSL Context is not implemented in a thread safe > way. This leads to exceptions when multiple threads are running against the > same Controller Service. > ** SSL Context init should be made thread safe. > > Livy should also support B
[jira] [Commented] (NIFI-6455) Can't see config properties that overflow a scrollable list
[ https://issues.apache.org/jira/browse/NIFI-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888288#comment-16888288 ] Peter Wicks commented on NIFI-6455: --- Thanks for taking a look at this [~rfellows] > Can't see config properties that overflow a scrollable list > --- > > Key: NIFI-6455 > URL: https://issues.apache.org/jira/browse/NIFI-6455 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.10.0 >Reporter: Peter Wicks >Assignee: Robert Fellows >Priority: Major > > When editing properties in a scrollable property configuration window, such > as in the Processor Configure window or the Controller Service Configure > window. > If the number of properties is too large to fit in the scroll window, than > one (or more?) of the properties will be inaccessible at the bottom. The > weird thing is the scrollbar is the right size, you just can't scroll down > that far. > I used GetSolr as my test case, as it has so many properties. > I tested in 1.9.2 and was not able to reproduce the issue, but in 1.10 it > shows up. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6455) Can't see config properties that overflow a scrollable list
[ https://issues.apache.org/jira/browse/NIFI-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6455: -- Description: When editing properties in a scrollable property configuration window, such as in the Processor Configure window or the Controller Service Configure window. If the number of properties is too large to fit in the scroll window, than one (or more?) of the properties will be inaccessible at the bottom. The weird thing is the scrollbar is the right size, you just can't scroll down that far. I used GetSolr as my test case, as it has so many properties. I tested in 1.9.2 and was not able to reproduce the issue, but in 1.10 it shows up. was: When editing properties in a scrollable property configuration window, such as in the Processor Configure window or the Controller Service Configure window. If the number of properties is too large to fit in the scroll window, than one (or more?) of the properties will be inaccessible at the bottom. The weird thing is the scrollbar is the right size, you just can't scroll down that far. I tested in 1.9.2 and was not able to reproduce the issue, but in 1.10 it shows up. > Can't see config properties that overflow a scrollable list > --- > > Key: NIFI-6455 > URL: https://issues.apache.org/jira/browse/NIFI-6455 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.10.0 >Reporter: Peter Wicks >Priority: Major > > When editing properties in a scrollable property configuration window, such > as in the Processor Configure window or the Controller Service Configure > window. > If the number of properties is too large to fit in the scroll window, than > one (or more?) of the properties will be inaccessible at the bottom. The > weird thing is the scrollbar is the right size, you just can't scroll down > that far. > I used GetSolr as my test case, as it has so many properties. > I tested in 1.9.2 and was not able to reproduce the issue, but in 1.10 it > shows up. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (NIFI-6455) Can't see config properties that overflow a scrollable list
Peter Wicks created NIFI-6455: - Summary: Can't see config properties that overflow a scrollable list Key: NIFI-6455 URL: https://issues.apache.org/jira/browse/NIFI-6455 Project: Apache NiFi Issue Type: Bug Components: Core UI Affects Versions: 1.10.0 Reporter: Peter Wicks When editing properties in a scrollable property configuration window, such as the Processor Configure window or the Controller Service Configure window, if the number of properties is too large to fit in the scroll window, then one (or more) of the properties will be inaccessible at the bottom. The weird thing is that the scrollbar is the right size; you just can't scroll down that far. I tested in 1.9.2 and was not able to reproduce the issue, but in 1.10 it shows up. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-6439. --- Resolution: Fixed > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build >Affects Versions: 1.10.0 > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Assignee: Peter Wicks >Priority: Critical > Labels: build > Fix For: 1.10.0 > > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Time Spent: 0.5h > Remaining Estimate: 95.5h > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. Any help would be greatly appreciated! > Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888219#comment-16888219 ] Peter Wicks commented on NIFI-6439: --- [~dsargrad] Give Andy a few minutes :D. Actually, the NiFi build is so big, that my Pull Request hasn't even finished it's automated build and test yet. > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build >Affects Versions: 1.10.0 > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Assignee: Peter Wicks >Priority: Blocker > Labels: build > Fix For: 1.10.0 > > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Time Spent: 10m > Remaining Estimate: 95h 50m > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. Any help would be greatly appreciated! > Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6439: -- Affects Version/s: 1.10.0 > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build >Affects Versions: 1.10.0 > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Assignee: Peter Wicks >Priority: Blocker > Labels: build > Fix For: 1.10.0 > > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Time Spent: 10m > Remaining Estimate: 95h 50m > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. Any help would be greatly appreciated! > Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888196#comment-16888196 ] Peter Wicks commented on NIFI-6439: --- I've confirmed that in the current source code, Spring has updated it to include https. This was added on May 7, 2019, so it's a pretty recent change, we'd need a very new version. [https://github.com/spring-projects/spring-framework/blob/master/spring-beans/src/main/resources/META-INF/spring.schemas] PR submitted. > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Assignee: Peter Wicks >Priority: Blocker > Labels: build > Fix For: 1.10.0 > > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Time Spent: 10m > Remaining Estimate: 95h 50m > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. Any help would be greatly appreciated! > Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks reassigned NIFI-6439: - Assignee: Peter Wicks > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Assignee: Peter Wicks >Priority: Blocker > Labels: build > Fix For: 1.10.0 > > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Remaining Estimate: 96h > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. Any help would be greatly appreciated! > Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Comment Edited] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888155#comment-16888155 ] Peter Wicks edited comment on NIFI-6439 at 7/18/19 4:54 PM: [~alopresto] I just completed testing this, and my test was a success. I changed all the xml files that contained this change and rebuilt. Here is my commit showing the changes that I tested with. [https://github.com/patricker/nifi/commit/e4361a660b7ee1aec4306eaf9c38a20eb040ec1b] was (Author: patricker): [~alopresto] I just completed testing this, and my test was a success. I did not manually change nifi-context.xml, but instead changed the xml files that contained this change and rebuilt form scratch. Here is my commit showing the changes that I tested with. [https://github.com/patricker/nifi/commit/e4361a660b7ee1aec4306eaf9c38a20eb040ec1b] > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Priority: Blocker > Labels: build > Fix For: 1.10.0 > > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Remaining Estimate: 96h > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. Any help would be greatly appreciated! 
> Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888155#comment-16888155 ] Peter Wicks commented on NIFI-6439: --- [~alopresto] I just completed testing this, and my test was a success. I did not manually change nifi-context.xml, but instead changed the xml files that contained this change and rebuilt form scratch. Here is my commit showing the changes that I tested with. [https://github.com/patricker/nifi/commit/e4361a660b7ee1aec4306eaf9c38a20eb040ec1b] > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Priority: Blocker > Labels: build > Fix For: 1.10.0 > > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Remaining Estimate: 96h > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. Any help would be greatly appreciated! > Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Comment Edited] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888121#comment-16888121 ] Peter Wicks edited comment on NIFI-6439 at 7/18/19 4:22 PM: [~alopresto], I tracked down why the change to "https" causes it to grab a remote copy. It turns out that leaving it as "http" was not actually using an outbound connection. Inside of the spring-beans jar file, in META-INF, they have a translation file that it uses first, before reaching out for remote connections. For example, for our Spring Beans 3.1 reference, if would auto map it to the local one, if you use "http". But the mapping fails for "https", and looks for an external resource instead. So putting https in this case... made it less secure :). Example Mapping (colon is escaped, because this is a Java Properties file, as discussed in the reference link at the bottom): http\://[www.springframework.org/schema/beans/spring-beans-3.1.xsd=org/springframework/beans/factory/xml/spring-beans.xsd|http://www.springframework.org/schema/beans/spring-beans-3.1.xsd=org/springframework/beans/factory/xml/spring-beans.xsd] So all we really need to do is change them back to `http`. [https://docs.spring.io/spring/docs/3.0.0.M3/reference/html/apbs05.html] was (Author: patricker): [~alopresto], I tracked down why the change to "https" causes it to grab a remote copy. It turns out that leaving it as "http" was not actually using an outbound connection. Inside of the spring-beans jar file, in META-INF, they have a translation file that it uses first, before reaching out for remote connections. For example, for our Spring Beans 3.1 reference, if would auto map it to the local one, if you use "http". But the mapping fails for "https", and looks for an external resource instead. So putting https in this case... made it less secure :). 
Example Mapping (colon is escaped, because this is a Java Properties file, as discussed in the reference link at the bottom): http\://www.springframework.org/schema/beans/spring-beans-3.1.xsd=org/springframework/beans/factory/xml/spring-beans.xsd [https://docs.spring.io/spring/docs/3.0.0.M3/reference/html/apbs05.html] > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Priority: Major > Labels: build > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Remaining Estimate: 96h > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49
[jira] [Comment Edited] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888121#comment-16888121 ] Peter Wicks edited comment on NIFI-6439 at 7/18/19 4:21 PM: [~alopresto], I tracked down why the change to "https" causes it to grab a remote copy. It turns out that leaving it as "http" was not actually using an outbound connection. Inside of the spring-beans jar file, in META-INF, they have a translation file that it uses first, before reaching out for remote connections. For example, for our Spring Beans 3.1 reference, if would auto map it to the local one, if you use "http". But the mapping fails for "https", and looks for an external resource instead. So putting https in this case... made it less secure :). Example Mapping (colon is escaped, because this is a Java Properties file, as discussed in the reference link at the bottom): http\://www.springframework.org/schema/beans/spring-beans-3.1.xsd=org/springframework/beans/factory/xml/spring-beans.xsd [https://docs.spring.io/spring/docs/3.0.0.M3/reference/html/apbs05.html] was (Author: patricker): [~alopresto], I tracked down why the change to "https" causes it to grab a remote copy. It turns out that leaving it as "http" was not actually using an outbound connection. Inside of the spring-beans jar file, in META-INF, they have a translation file that it uses first, before reaching out for remote connections. For example, for our Spring Beans 3.1 reference, if would auto map it to the local one, if you use "http". But the mapping fails for "https", and looks for an external resource instead. So putting https in this case... made it less secure :). http\://www.springframework.org/schema/beans/spring-beans-3.1.xsd=org/springframework/beans/factory/xml/spring-beans.xsd [https://docs.spring.io/spring/docs/3.0.0.M3/reference/html/apbs05.html] > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Priority: Major > Labels: build > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Remaining Estimate: 96h > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > 
org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. Any help would be greatly appreciated! > Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888121#comment-16888121 ] Peter Wicks commented on NIFI-6439: --- [~alopresto], I tracked down why the change to "https" causes it to grab a remote copy. It turns out that leaving it as "http" was not actually using an outbound connection. Inside of the spring-beans jar file, in META-INF, they have a translation file that it uses first, before reaching out for remote connections. For example, for our Spring Beans 3.1 reference, if would auto map it to the local one, if you use "http". But the mapping fails for "https", and looks for an external resource instead. So putting https in this case... made it less secure :). http\://www.springframework.org/schema/beans/spring-beans-3.1.xsd=org/springframework/beans/factory/xml/spring-beans.xsd [https://docs.spring.io/spring/docs/3.0.0.M3/reference/html/apbs05.html] > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Priority: Major > Labels: build > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Remaining Estimate: 96h > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. 
Any help would be greatly appreciated! > Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
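A small standalone check of the mapping mechanism described in the comment above; it assumes spring-beans is on the classpath and uses the spring-beans-3.1 XSD key quoted there. On an older spring-beans jar the http key resolves to an XSD bundled in the jar, while the https key has no mapping, which is why an https schemaLocation fell through to a remote fetch.

{code:java}
import java.io.InputStream;
import java.net.URL;
import java.util.Enumeration;
import java.util.Properties;

/**
 * Minimal sketch: older spring-beans jars ship META-INF/spring.schemas, a
 * Java Properties file mapping schema URLs to classpath XSDs. If only the
 * http form of a URL is present as a key, an https schemaLocation misses the
 * mapping and Spring falls back to fetching the XSD over the network.
 */
public class SpringSchemasLookup {

    public static void main(String[] args) throws Exception {
        String httpKey  = "http://www.springframework.org/schema/beans/spring-beans-3.1.xsd";
        String httpsKey = "https://www.springframework.org/schema/beans/spring-beans-3.1.xsd";

        // Merge every spring.schemas file visible on the classpath.
        Properties merged = new Properties();
        Enumeration<URL> resources = SpringSchemasLookup.class.getClassLoader()
                .getResources("META-INF/spring.schemas");
        while (resources.hasMoreElements()) {
            try (InputStream in = resources.nextElement().openStream()) {
                merged.load(in);
            }
        }

        // On an old spring-beans jar the http key prints a classpath resource
        // path, while the https key prints null, i.e. "go remote".
        System.out.println("http  -> " + merged.getProperty(httpKey));
        System.out.println("https -> " + merged.getProperty(httpsKey));
    }
}
{code}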
[jira] [Commented] (NIFI-6439) web.context.ContextLoader Context initialization failed
[ https://issues.apache.org/jira/browse/NIFI-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888107#comment-16888107 ] Peter Wicks commented on NIFI-6439: --- [~alopresto] This is direction I think it makes sense to go. Even without the issue in this Jira ticket, you are basically saying that NiFi must have internet access to startup? I don't think that is a good direction. The Ignite client does it the way you described, and uses the path, "http://www.springframework.org/schema/beans/spring-beans.xsd";. The post is a bit old, but it is recommended to use the version-less package name, so it will pull the correct one from the JAR automatically: https://stackoverflow.com/a/20900801/328968 > web.context.ContextLoader Context initialization failed > --- > > Key: NIFI-6439 > URL: https://issues.apache.org/jira/browse/NIFI-6439 > Project: Apache NiFi > Issue Type: Bug > Components: Configuration, Tools and Build > Environment: centos 7 > mvn 3.6.0 > java 1.8.0_212 openjdk >Reporter: Zinan Ma >Priority: Major > Labels: build > Attachments: image-2019-07-15-08-21-51-633.png, > image-2019-07-16-09-09-36-005.png, image-2019-07-18-09-16-16-488.png > > Original Estimate: 96h > Remaining Estimate: 96h > > Hi NIFI team, > I have been trying to run NIFI in a local debugging environment by following > this [tutorial|[https://nifi.apache.org/quickstart.html]] > When I do mvn -T C2.0 clean install, The test cases failed.(some spring > context test case) I then did a mvn clean and > mvn install -DskipTests > I successfully build it but then when I run ./nifi.sh start, Nifi could not > start so I check the nifi-app.log and here is the first error: > {color:#d04437} 2019-07-12 09:37:53,881 INFO [main] > o.e.j.s.handler.ContextHandler._nifi_api Initializing Spring root > WebApplicationContext{color} > {color:#d04437}2019-07-12 09:40:01,659 ERROR [main] > o.s.web.context.ContextLoader Context initialization failed{color} > {color:#d04437}org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: > Line 19 in XML document from class path resource [nifi-context.xml] is > invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 19; > columnNumber: 139; cvc-elt.1: Cannot find the declaration of element > 'beans'.{color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:399){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336){color} > {color:#d04437} at > org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217){color} > {color:#d04437} at > org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188){color} > !image-2019-07-12-16-49-56-584.png! > > Now I am really stuck at this stage. Any help would be greatly appreciated! > Please let me know if you need additional information! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (NIFI-6175) Spark Livy - Add Support for All Missing Session Features
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6175: -- Affects Version/s: (was: 1.9.1) 1.9.2 > Spark Livy - Add Support for All Missing Session Features > - > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > * Would like to see all available options there, with a blanket "conf" > option for users to provide custom configuration. > * When the controller service shuts down, sessions are left running, with no > option to shut them down. Add in functionality to shutdown open sessions. > * If the controller service finds no Idle Livy Sessions, it will create a > new session... until the queue runs out of resources :). Need to have a > Min/Max/Should be elastic or strict option > * When Livy starts up, it searches for existing sessions, but does not > verify that those sessions belong to it. > ** The Kerberos identity should be used to verify the identity on the > session matches the identity on the controller service. > ** Also, if a Proxy user has been specified, that should also be verified. > If no proxy user was specified, then the Proxy user on the Livy session > should match the Kerberos identity. > * The initialization of the SSL Context is not implemented in a thread safe > way. This leads to exceptions when multiple threads are running against the > same Controller Service. > ** SSL Context init should be made thread safe. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6175) Spark Livy - Add Support for All Missing Session Features
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6175: -- Description: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) * Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. * When the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. * If the controller service finds no Idle Livy Sessions, it will create a new session... until the queue runs out of resources :). Need to have a Min/Max/Should be elastic or strict option * When Livy starts up, it searches for existing sessions, but does not verify that those sessions belong to it. ** The Kerberos identity should be used to verify the identity on the session matches the identity on the controller service. ** Also, if a Proxy user has been specified, that should also be verified. If no proxy user was specified, then the Proxy user on the Livy session should match the Kerberos identity. * The initialization of the SSL Context is not implemented in a thread safe way. This leads to exceptions when multiple threads are running against the same Controller Service. ** SSL Context init should be made thread safe. was: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) * Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. * When the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. * If the controller service finds no Idle Livy Sessions, it will create a new session... until the queue runs out of resources :). Need to have a Min/Max/Should be elastic or strict option * When Livy starts up, it searches for existing sessions, but does not verify that those sessions belong to it. ** The Kerberos identity should be used to verify the identity on the session matches the identity on the controller service. ** Also, if a Proxy user has been specified, that should also be verified. If no proxy user was specified, then the Proxy user on the Livy session should match the Kerberos identity. > Spark Livy - Add Support for All Missing Session Features > - > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.1 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > * Would like to see all available options there, with a blanket "conf" > option for users to provide custom configuration. > * When the controller service shuts down, sessions are left running, with no > option to shut them down. Add in functionality to shutdown open sessions. > * If the controller service finds no Idle Livy Sessions, it will create a > new session... until the queue runs out of resources :). 
Need to have a > Min/Max/Should be elastic or strict option > * When Livy starts up, it searches for existing sessions, but does not > verify that those sessions belong to it. > ** The Kerberos identity should be used to verify the identity on the > session matches the identity on the controller service. > ** Also, if a Proxy user has been specified, that should also be verified. > If no proxy user was specified, then the Proxy user on the Livy session > should match the Kerberos identity. > * The initialization of the SSL Context is not implemented in a thread safe > way. This leads to exceptions when multiple threads are running against the > same Controller Service. > ** SSL Context init should be made thread safe. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
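For the SSL Context thread-safety point above, here is a minimal sketch of one way to make lazy initialization safe under concurrent calls; the class and field names are illustrative and are not the actual Livy controller service code:
{code:java}
import javax.net.ssl.SSLContext;

// Illustrative stand-in for a controller service that lazily builds its SSLContext.
public class LivySessionPoolSketch {

    private volatile SSLContext sslContext; // volatile so other threads see the published instance

    private SSLContext getSslContext() throws Exception {
        SSLContext local = sslContext;
        if (local == null) {
            synchronized (this) {
                local = sslContext;
                if (local == null) {
                    local = SSLContext.getInstance("TLS");
                    // Real code would supply key/trust managers from the configured SSL Context Service.
                    local.init(null, null, null);
                    sslContext = local;
                }
            }
        }
        return local;
    }
}
{code}
Double-checked locking with a volatile field (or simply building the context once in an @OnEnabled method) avoids two threads racing to initialize the same context.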
[jira] [Created] (NIFI-6386) Allow Processors to Process Terminate Events
Peter Wicks created NIFI-6386: - Summary: Allow Processors to Process Terminate Events Key: NIFI-6386 URL: https://issues.apache.org/jira/browse/NIFI-6386 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Affects Versions: 1.9.2 Reporter: Peter Wicks When a processor is Terminated, its thread is interrupted, but the processor is never given a chance to gracefully terminate. Sometimes this means that external connections, such as database or external process calls, will not allow the processor to terminate, and it hangs. Processors that need it should be given a short period of time in which to kill internal connections, such as calling `statement.cancel()` for JDBC calls. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
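A minimal sketch of the idea using only standard JDBC; the class and method names are illustrative, not actual NiFi code:
{code:java}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative only: remember the active Statement so a terminate hook can cancel it
// instead of waiting for the driver to return.
public class CancellableQuery {

    private final AtomicReference<Statement> activeStatement = new AtomicReference<>();

    public void run(final Connection conn, final String sql) throws SQLException {
        try (Statement st = conn.createStatement()) {
            activeStatement.set(st);
            try (ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    // process rows...
                }
            } finally {
                activeStatement.set(null);
            }
        }
    }

    // Hypothetical terminate hook: give the processor a chance to abort the JDBC call.
    public void onTerminate() {
        final Statement st = activeStatement.get();
        if (st != null) {
            try {
                st.cancel(); // interrupts the in-flight query on drivers that support it
            } catch (SQLException ignore) {
                // best effort; the thread interrupt will follow
            }
        }
    }
}
{code}
A terminate hook of this shape would let the framework abort the in-flight query before interrupting the processor's thread.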
[jira] [Updated] (NIFI-6175) Spark Livy - Add Support for All Missing Session Features
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6175: -- Description: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) * Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. * When the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. * If the controller service finds no Idle Livy Sessions, it will create a new session... until the queue runs out of resources :). Need to have a Min/Max/Should be elastic or strict option * When Livy starts up, it searches for existing sessions, but does not verify that those sessions belong to it. ** The Kerberos identity should be used to verify the identity on the session matches the identity on the controller service. ** Also, if a Proxy user has been specified, that should also be verified. If no proxy user was specified, then the Proxy user on the Livy session should match the Kerberos identity. was: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. When the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. When Livy starts up, it searches for existing sessions, but does not verify that those sessions belong to it. The Kerberos identity should be used to verify the identity on the session matches the identity on the controller service. > Spark Livy - Add Support for All Missing Session Features > - > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.1 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > * Would like to see all available options there, with a blanket "conf" > option for users to provide custom configuration. > * When the controller service shuts down, sessions are left running, with no > option to shut them down. Add in functionality to shutdown open sessions. > * If the controller service finds no Idle Livy Sessions, it will create a > new session... until the queue runs out of resources :). Need to have a > Min/Max/Should be elastic or strict option > * When Livy starts up, it searches for existing sessions, but does not > verify that those sessions belong to it. > ** The Kerberos identity should be used to verify the identity on the > session matches the identity on the controller service. > ** Also, if a Proxy user has been specified, that should also be verified. > If no proxy user was specified, then the Proxy user on the Livy session > should match the Kerberos identity. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6356) Refactor UpdateAttribute
Peter Wicks created NIFI-6356: - Summary: Refactor UpdateAttribute Key: NIFI-6356 URL: https://issues.apache.org/jira/browse/NIFI-6356 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Peter Wicks Assignee: Peter Wicks While working on NIFI-6344 I made several nice refactors to UpdateAttribute. It looks like that PR is not going anywhere, so I am moving the refactors that are not directly related to NIFI-6344 into a separate PR. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-1321) Support appending files in PutFile
[ https://issues.apache.org/jira/browse/NIFI-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-1321: -- Affects Version/s: (was: 0.4.0) 1.9.2 > Support appending files in PutFile > -- > > Key: NIFI-1321 > URL: https://issues.apache.org/jira/browse/NIFI-1321 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.9.2 >Reporter: Scott >Priority: Major > Attachments: putfile_append.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-6339) ListHDFS processor will list files without using previous state when cluster startup
[ https://issues.apache.org/jira/browse/NIFI-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16856822#comment-16856822 ] Peter Wicks commented on NIFI-6339: --- [~markap14]/[~bende], I was going to review this, but am not comfortable enough with the context of what this code does. Would appreciate your assistance. > ListHDFS processor will list files without using previous state when cluster > startup > > > Key: NIFI-6339 > URL: https://issues.apache.org/jira/browse/NIFI-6339 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.0, 1.9.1, 1.9.2 >Reporter: Hsin-Ying Lee >Priority: Major > Attachments: NIFI-6339.v0.patch > > > When a node starts up, NiFi will create the processor and load the property & > state. > But ListHDFS will ignore the previous listed-state stored on ZooKeeper, and > relist all files again. > > I also found that when we call setProperty, we only check the new value > against the default. If oldValue is the same as newValue, it'll still > trigger onPropertyModified. > This causes the ListHDFS local variable resetState to be true. When ListHDFS is > triggered, it'll clear the state and relist all the files in the > directory. > > {code:java} > // AbstractComponentNode.java > if (!value.equals(propertyModComparisonValue)) { > try { > onPropertyModified(descriptor, oldValue, value); > } catch (final Exception e) { > // nothing really to do here... > } > } > {code} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6350) Flow Controller Startup Error Misleading
Peter Wicks created NIFI-6350: - Summary: Flow Controller Startup Error Misleading Key: NIFI-6350 URL: https://issues.apache.org/jira/browse/NIFI-6350 Project: Apache NiFi Issue Type: Improvement Reporter: Peter Wicks Assignee: Peter Wicks When NiFi is part of a cluster and is initializing the flow, it displays a very misleading status message, "Cluster is still in the process of voting on the appropriate Data Flow." This message is shown until all process groups have been synchronized with the registry, etc. The message should be more generic, for example, "NiFi is still initializing the Data Flow". -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6344) Add Failure Relationship to UpdateAttribute
Peter Wicks created NIFI-6344: - Summary: Add Failure Relationship to UpdateAttribute Key: NIFI-6344 URL: https://issues.apache.org/jira/browse/NIFI-6344 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Peter Wicks Assignee: Peter Wicks Expression Language (EL) makes it possible for an UpdateAttribute processor to fail. When this happens the FlowFile is rolled back, and there is no way to route it to handle the failure automatically. Considerations: UpdateAttribute is used in probably all but the simplest of flows, thus any change made to support a failure relationship must be handled delicately. The goal of this change is for users to see no change in functionality unless they specifically configure it. Proposal: It was proposed on the Slack channel to create the failure relationship, but default it to auto-terminate. This is a good start, but without further work it would result in a change in functionality. I propose that we default to auto-terminate, but also detect this behavior in the code. If the Failure relationship is set to auto-terminate then we will roll back the transaction. The only downside I see with this is that you can't actually auto-terminate Failures without the addition of another property, such as a Failure Behavior property with Route to Failure and Rollback options. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
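A minimal sketch of the proposed behavior; REL_SUCCESS/REL_FAILURE are the usual relationships, and applyAttributeUpdates() and failureIsAutoTerminated() are hypothetical placeholders rather than actual UpdateAttribute methods:
{code:java}
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

// Sketch of the proposed onTrigger handling, not the actual implementation.
public void onTrigger(final ProcessContext context, final ProcessSession session) {
    FlowFile flowFile = session.get();
    if (flowFile == null) {
        return;
    }
    try {
        flowFile = applyAttributeUpdates(session, flowFile); // EL evaluation can throw here
        session.transfer(flowFile, REL_SUCCESS);
    } catch (final ProcessException e) {
        if (failureIsAutoTerminated(context)) {
            // Failure is auto-terminated: keep today's behavior and retry from the incoming queue.
            session.rollback(true); // true = penalize the FlowFile
        } else {
            session.transfer(session.penalize(flowFile), REL_FAILURE);
        }
    }
}
{code}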
[jira] [Updated] (NIFI-6339) ListHDFS processor will list files without using previous state when cluster startup
[ https://issues.apache.org/jira/browse/NIFI-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6339: -- Fix Version/s: (was: 1.9.2) (was: 1.9.1) (was: 1.9.0) > ListHDFS processor will list files without using previous state when cluster > startup > > > Key: NIFI-6339 > URL: https://issues.apache.org/jira/browse/NIFI-6339 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.0, 1.9.1, 1.9.2 >Reporter: Hsin-Ying Lee >Priority: Major > Attachments: NIFI-6339.v0.patch > > > When a node starts up, NiFi will create the processor and load the property & > state. > But ListHDFS will ignore the previous listed-state stored on ZooKeeper, and > relist all files again. > > I also found that when we call setProperty, we only check the new value > against the default. If oldValue is the same as newValue, it'll still > trigger onPropertyModified. > This causes the ListHDFS local variable resetState to be true. When ListHDFS is > triggered, it'll clear the state and relist all the files in the > directory. > > {code:java} > // AbstractComponentNode.java > if (!value.equals(propertyModComparisonValue)) { > try { > onPropertyModified(descriptor, oldValue, value); > } catch (final Exception e) { > // nothing really to do here... > } > } > {code} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-6339) ListHDFS processor will list files without using previous state when cluster startup
[ https://issues.apache.org/jira/browse/NIFI-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks reassigned NIFI-6339: - Assignee: Peter Wicks > ListHDFS processor will list files without using previous state when cluster > startup > > > Key: NIFI-6339 > URL: https://issues.apache.org/jira/browse/NIFI-6339 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.0, 1.9.1, 1.9.2 >Reporter: Hsin-Ying Lee >Assignee: Peter Wicks >Priority: Major > Fix For: 1.9.0, 1.9.1, 1.9.2 > > Attachments: NIFI-6339.v0.patch > > > When a node starts up, NiFi will create the processor and load the property & > state. > But ListHDFS will ignore the previous listed-state stored on ZooKeeper, and > relist all files again. > > I also found that when we call setProperty, we only check the new value > against the default. If oldValue is the same as newValue, it'll still > trigger onPropertyModified. > This causes the ListHDFS local variable resetState to be true. When ListHDFS is > triggered, it'll clear the state and relist all the files in the > directory. > > {code:java} > // AbstractComponentNode.java > if (!value.equals(propertyModComparisonValue)) { > try { > onPropertyModified(descriptor, oldValue, value); > } catch (final Exception e) { > // nothing really to do here... > } > } > {code} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-6339) ListHDFS processor will list files without using previous state when cluster startup
[ https://issues.apache.org/jira/browse/NIFI-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks reassigned NIFI-6339: - Assignee: (was: Peter Wicks) > ListHDFS processor will list files without using previous state when cluster > startup > > > Key: NIFI-6339 > URL: https://issues.apache.org/jira/browse/NIFI-6339 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.9.0, 1.9.1, 1.9.2 >Reporter: Hsin-Ying Lee >Priority: Major > Fix For: 1.9.0, 1.9.1, 1.9.2 > > Attachments: NIFI-6339.v0.patch > > > When a node starts up, NiFi will create the processor and load the property & > state. > But ListHDFS will ignore the previous listed-state stored on ZooKeeper, and > relist all files again. > > I also found that when we call setProperty, we only check the new value > against the default. If oldValue is the same as newValue, it'll still > trigger onPropertyModified. > This causes the ListHDFS local variable resetState to be true. When ListHDFS is > triggered, it'll clear the state and relist all the files in the > directory. > > {code:java} > // AbstractComponentNode.java > if (!value.equals(propertyModComparisonValue)) { > try { > onPropertyModified(descriptor, oldValue, value); > } catch (final Exception e) { > // nothing really to do here... > } > } > {code} > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
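One possible shape of the guard in AbstractComponentNode.setProperty(), shown as a sketch only and not the committed fix: also compare against the previous value, so that re-applying an identical value on startup does not fire onPropertyModified and flip ListHDFS's resetState flag.
{code:java}
// Sketch: extend the snippet quoted in the ticket with an old-vs-new comparison.
if (!value.equals(propertyModComparisonValue)
        && (oldValue == null || !oldValue.equals(value))) {
    try {
        onPropertyModified(descriptor, oldValue, value);
    } catch (final Exception e) {
        // nothing really to do here...
    }
}
{code}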
[jira] [Commented] (NIFI-6250) Add Kerberos authentication support to HTTP Processors
[ https://issues.apache.org/jira/browse/NIFI-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830366#comment-16830366 ] Peter Wicks commented on NIFI-6250: --- Decided to not update GetHTTP and PostHTTP as they are deprecated. > Add Kerberos authentication support to HTTP Processors > -- > > Key: NIFI-6250 > URL: https://issues.apache.org/jira/browse/NIFI-6250 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > InvokeHTTP should support authenticating using Kerberos. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6250) Add Kerberos authentication support to HTTP Processors
[ https://issues.apache.org/jira/browse/NIFI-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6250: -- Description: InvokeHTTP should support authenticating using Kerberos. (was: GetHTTP, PostHTTP and InvokeHTTP should support authenticating using Kerberos.) > Add Kerberos authentication support to HTTP Processors > -- > > Key: NIFI-6250 > URL: https://issues.apache.org/jira/browse/NIFI-6250 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > InvokeHTTP should support authenticating using Kerberos. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6250) Add Kerberos authentication support to InvokeHTTP Processor
[ https://issues.apache.org/jira/browse/NIFI-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6250: -- Summary: Add Kerberos authentication support to InvokeHTTP Processor (was: Add Kerberos authentication support to HTTP Processors) > Add Kerberos authentication support to InvokeHTTP Processor > --- > > Key: NIFI-6250 > URL: https://issues.apache.org/jira/browse/NIFI-6250 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > InvokeHTTP should support authenticating using Kerberos. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6250) Add Kerberos authentication support to HTTP Processors
Peter Wicks created NIFI-6250: - Summary: Add Kerberos authentication support to HTTP Processors Key: NIFI-6250 URL: https://issues.apache.org/jira/browse/NIFI-6250 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Affects Versions: 1.9.2 Reporter: Peter Wicks Assignee: Peter Wicks GetHTTP, PostHTTP and InvokeHTTP should support authenticating using Kerberos. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6175) Spark Livy - Add Support for All Missing Session Features
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6175: -- Description: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. When the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. When Livy starts up, it searches for existing sessions, but does not verify that those sessions belong to it. The Kerberos identity should be used to verify the identity on the session matches the identity on the controller service. was: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. Also, when the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. > Spark Livy - Add Support for All Missing Session Features > - > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.1 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > Would like to see all available options there, with a blanket "conf" option > for users to provide custom configuration. > When the controller service shuts down, sessions are left running, with no > option to shut them down. Add in functionality to shutdown open sessions. > When Livy starts up, it searches for existing sessions, but does not verify > that those sessions belong to it. The Kerberos identity should be used to > verify the identity on the session matches the identity on the controller > service. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6175) Spark Livy - Add Support for All Missing Session Features
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6175: -- Summary: Spark Livy - Add Support for All Missing Session Features (was: Spark Livy - Add Support for All Missing Session Startup Features) > Spark Livy - Add Support for All Missing Session Features > - > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.1 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > Would like to see all available options there, with a blanket "conf" option > for users to provide custom configuration. > Also, when the controller service shuts down, sessions are left running, with > no option to shut them down. Add in functionality to shutdown open sessions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6175) Spark Livy - Add Support for All Missing Session Startup Features
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6175: -- Description: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. Also, when the controller service shuts down, sessions are left running, with no option to shut them down. Add in functionality to shutdown open sessions. was: The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. > Spark Livy - Add Support for All Missing Session Startup Features > - > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.1 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > Would like to see all available options there, with a blanket "conf" option > for users to provide custom configuration. > Also, when the controller service shuts down, sessions are left running, with > no option to shut them down. Add in functionality to shutdown open sessions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-6049) MoveHDFS property improvement
[ https://issues.apache.org/jira/browse/NIFI-6049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-6049. --- Resolution: Fixed Fix Version/s: 1.10.0 PR Merged. > MoveHDFS property improvement > - > > Key: NIFI-6049 > URL: https://issues.apache.org/jira/browse/NIFI-6049 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Georgy >Assignee: Sivaprasanna Sethuraman >Priority: Major > Fix For: 1.10.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Change expression language level for output directory property (flow file > attributes and variable registry). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-6221) Improve navigational efficiency on List Queue dialog to View/Download content
[ https://issues.apache.org/jira/browse/NIFI-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-6221. --- Resolution: Fixed Fix Version/s: 1.10.0 > Improve navigational efficiency on List Queue dialog to View/Download content > - > > Key: NIFI-6221 > URL: https://issues.apache.org/jira/browse/NIFI-6221 > Project: Apache NiFi > Issue Type: Improvement > Components: Core UI >Affects Versions: 1.9.2 >Reporter: Alex A. >Assignee: Alex A. >Priority: Minor > Fix For: 1.10.0 > > Time Spent: 1.5h > Remaining Estimate: 0h > > The current navigation route for an end user to View/Download FlowFile content > from the List Queue dialog is cumbersome, requiring the user to > traverse the FlowFile Details dialog to gain access to buttons that can > perform these functions. Recommend implementing View/Download icon buttons > in the List Queue slick grid 'actions' column that will provide more direct > access for users to these functions and improve overall navigational > efficiency. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6223) Expose Cluster Node Type to Controller Services
[ https://issues.apache.org/jira/browse/NIFI-6223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6223: -- Summary: Expose Cluster Node Type to Controller Services (was: Controller Services Support OnPrimaryNodeStateChange, but do not receive the initial node type information on init) > Expose Cluster Node Type to Controller Services > --- > > Key: NIFI-6223 > URL: https://issues.apache.org/jira/browse/NIFI-6223 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > > When a processor goes through its init phase, it receives the state of the > current cluster node through `ProcessorInitializationContext`. Both > Processors and ControllerServices support the > OnPrimaryNodeStateChange annotation. But Controller Services do not receive > the initial node state through init, so they are only able to receive updates, and > have no initial information about cluster state. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6223) Controller Services Support OnPrimaryNodeStateChange, but do not receive the initial node type information on init
Peter Wicks created NIFI-6223: - Summary: Controller Services Support OnPrimaryNodeStateChange, but do not receive the initial node type information on init Key: NIFI-6223 URL: https://issues.apache.org/jira/browse/NIFI-6223 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Peter Wicks Assignee: Peter Wicks When a processor goes through its init phase, it receives the state of the current cluster node through `ProcessorInitializationContext`. Both Processors and ControllerServices support the OnPrimaryNodeStateChange annotation. But Controller Services do not receive the initial node state through init, so they are only able to receive updates, and have no initial information about cluster state. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-6175) Spark Livy - Add Support for All Missing Session Startup Features
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-6175: -- Affects Version/s: 1.9.1 > Spark Livy - Add Support for All Missing Session Startup Features > - > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.9.1 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > Would like to see all available options there, with a blanket "conf" option > for users to provide custom configuration. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-6175) Spark Livy - Add Support for All Missing Session Startup Features
[ https://issues.apache.org/jira/browse/NIFI-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16808046#comment-16808046 ] Peter Wicks commented on NIFI-6175: --- Some crossover between NIFI-4946 and this ticket, but that ticket was partially implemented by other tickets, and the PR has stalled out. > Spark Livy - Add Support for All Missing Session Startup Features > - > > Key: NIFI-6175 > URL: https://issues.apache.org/jira/browse/NIFI-6175 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > > The Livy Session Controller is missing many of the options available, and > many of them I feel are critical for this service to be useful (queue? conf? > num of executors?) > Would like to see all available options there, with a blanket "conf" option > for users to provide custom configuration. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-6175) Spark Livy - Add Support for All Missing Session Startup Features
Peter Wicks created NIFI-6175: - Summary: Spark Livy - Add Support for All Missing Session Startup Features Key: NIFI-6175 URL: https://issues.apache.org/jira/browse/NIFI-6175 Project: Apache NiFi Issue Type: Improvement Reporter: Peter Wicks Assignee: Peter Wicks The Livy Session Controller is missing many of the options available, and many of them I feel are critical for this service to be useful (queue? conf? num of executors?) Would like to see all available options there, with a blanket "conf" option for users to provide custom configuration. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-5846) Redirect URL is incorrect after logout
[ https://issues.apache.org/jira/browse/NIFI-5846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-5846. --- Resolution: Fixed Fix Version/s: 1.9.0 Already merged, just forgot to manually close Jira ticket. > Redirect URL is incorrect after logout > -- > > Key: NIFI-5846 > URL: https://issues.apache.org/jira/browse/NIFI-5846 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.8.0 >Reporter: Kotaro Terada >Assignee: Kotaro Terada >Priority: Major > Fix For: 1.9.0 > > Attachments: login-incorrect-url.png > > > When we click the logout button on the Web UI, it currently redirects to > "/login" instead of "/nifi/login" after logging out, which causes the error > page shown in the attached screenshot. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5744) Put exception message to attribute while ExecuteSQL fail
[ https://issues.apache.org/jira/browse/NIFI-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-5744: -- Fix Version/s: 1.9.0 > Put exception message to attribute while ExecuteSQL fail > > > Key: NIFI-5744 > URL: https://issues.apache.org/jira/browse/NIFI-5744 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.7.1 >Reporter: Deon Huang >Assignee: Deon Huang >Priority: Minor > Fix For: 1.9.0 > > > In some scenarios, it would be great if we could have different behavior based > on the exception. > Tracking the error afterwards in an attribute is better than tracking it only in the > log. > For example, if it's a connection refused exception due to a wrong URL, > we won't want to retry, and an error message attribute would be helpful for keeping > track of the cause. > In another scenario, where the database is temporarily unavailable, we should > retry based on the exception. > Should be a quick fix in AbstractExecuteSQL before transferring the flowfile to the > failure relationship > {code:java} > session.transfer(fileToProcess, REL_FAILURE); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-5744) Put exception message to attribute while ExecuteSQL fail
[ https://issues.apache.org/jira/browse/NIFI-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-5744. --- Resolution: Fixed Was already merged, but I forgot to manually close the Jira ticket. > Put exception message to attribute while ExecuteSQL fail > > > Key: NIFI-5744 > URL: https://issues.apache.org/jira/browse/NIFI-5744 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.7.1 >Reporter: Deon Huang >Assignee: Deon Huang >Priority: Minor > Fix For: 1.9.0 > > > In some scenarios, it would be great if we could have different behavior based > on the exception. > Tracking the error afterwards in an attribute is better than tracking it only in the > log. > For example, if it's a connection refused exception due to a wrong URL, > we won't want to retry, and an error message attribute would be helpful for keeping > track of the cause. > In another scenario, where the database is temporarily unavailable, we should > retry based on the exception. > Should be a quick fix in AbstractExecuteSQL before transferring the flowfile to the > failure relationship > {code:java} > session.transfer(fileToProcess, REL_FAILURE); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
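A minimal sketch of the idea (the attribute name is illustrative): record why the query failed before routing to failure, so downstream flow logic can branch on it.
{code:java}
import java.sql.SQLException;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessSession;

// Sketch only; REL_FAILURE is the processor's failure relationship and the attribute
// name below is an assumption for illustration.
private FlowFile routeToFailure(final ProcessSession session, FlowFile fileToProcess,
        final SQLException e) {
    final String message = e.getMessage() == null ? e.getClass().getSimpleName() : e.getMessage();
    fileToProcess = session.putAttribute(fileToProcess, "executesql.error.message", message);
    session.transfer(fileToProcess, REL_FAILURE);
    return fileToProcess;
}
{code}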
[jira] [Updated] (NIFI-5722) Expose Penalty Remaining Duration
[ https://issues.apache.org/jira/browse/NIFI-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-5722: -- Fix Version/s: 1.9.0 > Expose Penalty Remaining Duration > - > > Key: NIFI-5722 > URL: https://issues.apache.org/jira/browse/NIFI-5722 > Project: Apache NiFi > Issue Type: New Feature > Components: Core Framework, Core UI >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > Fix For: 1.9.0 > > Attachments: FlowFIleDetailsWithPenaltyTime.PNG, > QueueWithPenaltyTime.PNG > > Time Spent: 40m > Remaining Estimate: 0h > > When a FlowFile is penalized a user is only able to see that it is penalized > in List Queue and FlowFile details, but not for how much longer the FlowFile > will be penalized. > The List Queue details need to show remaining penalty duration, along with > the FlowFile details. This should replace the existing 'Yes' value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-5722) Expose Penalty Remaining Duration
[ https://issues.apache.org/jira/browse/NIFI-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-5722. --- Resolution: Fixed > Expose Penalty Remaining Duration > - > > Key: NIFI-5722 > URL: https://issues.apache.org/jira/browse/NIFI-5722 > Project: Apache NiFi > Issue Type: New Feature > Components: Core Framework, Core UI >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Minor > Attachments: FlowFIleDetailsWithPenaltyTime.PNG, > QueueWithPenaltyTime.PNG > > Time Spent: 40m > Remaining Estimate: 0h > > When a FlowFile is penalized a user is only able to see that it is penalized > in List Queue and FlowFile details, but not for how much longer the FlowFile > will be penalized. > The List Queue details need to show remaining penalty duration, along with > the FlowFile details. This should replace the existing 'Yes' value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5940) Cluster Node Offload Hangs if any RPG on flow is Disabled
[ https://issues.apache.org/jira/browse/NIFI-5940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-5940: -- Affects Version/s: 1.8.0 > Cluster Node Offload Hangs if any RPG on flow is Disabled > - > > Key: NIFI-5940 > URL: https://issues.apache.org/jira/browse/NIFI-5940 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.8.0 >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > > If any Remote Process Group on the flow is disabled when a user starts a node > Offload, then offload fails. > This is because the Offload process tries to turn off all Remote Process > Group's, even if they are already disabled. > 2019-01-09 17:22:00,823 ERROR [Offload Flow Files from Node] > org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Offload Flow > Files from Node,5,main]: java.lang.IllegalStateException: > 33a4935b-5800-360d-9250-2179e3ef5efe is not transmitting > 2019-01-09 17:22:00,823 ERROR [Offload Flow Files from Node] > org.apache.nifi.NiFi > java.lang.IllegalStateException: 33a4935b-5800-360d-9250-2179e3ef5efe is not > transmitting > at > org.apache.nifi.remote.StandardRemoteProcessGroup.verifyCanStopTransmitting(StandardRemoteProcessGroup.java:1333) > at > org.apache.nifi.remote.StandardRemoteProcessGroup.stopTransmitting(StandardRemoteProcessGroup.java:1036) > at java.util.ArrayList.forEach(ArrayList.java:1249) > at > org.apache.nifi.controller.StandardFlowService.offload(StandardFlowService.java:706) > at > org.apache.nifi.controller.StandardFlowService.handleOffloadRequest(StandardFlowService.java:688) > at > org.apache.nifi.controller.StandardFlowService.access$400(StandardFlowService.java:105) > at > org.apache.nifi.controller.StandardFlowService$3.run(StandardFlowService.java:428) > at java.lang.Thread.run(Thread.java:745) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5940) Cluster Node Offload Hangs if any RPG on flow is Disabled
Peter Wicks created NIFI-5940: - Summary: Cluster Node Offload Hangs if any RPG on flow is Disabled Key: NIFI-5940 URL: https://issues.apache.org/jira/browse/NIFI-5940 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Peter Wicks Assignee: Peter Wicks If any Remote Process Group on the flow is disabled when a user starts a node Offload, then offload fails. This is because the Offload process tries to turn off all Remote Process Groups, even if they are already disabled. 2019-01-09 17:22:00,823 ERROR [Offload Flow Files from Node] org.apache.nifi.NiFi An Unknown Error Occurred in Thread Thread[Offload Flow Files from Node,5,main]: java.lang.IllegalStateException: 33a4935b-5800-360d-9250-2179e3ef5efe is not transmitting 2019-01-09 17:22:00,823 ERROR [Offload Flow Files from Node] org.apache.nifi.NiFi java.lang.IllegalStateException: 33a4935b-5800-360d-9250-2179e3ef5efe is not transmitting at org.apache.nifi.remote.StandardRemoteProcessGroup.verifyCanStopTransmitting(StandardRemoteProcessGroup.java:1333) at org.apache.nifi.remote.StandardRemoteProcessGroup.stopTransmitting(StandardRemoteProcessGroup.java:1036) at java.util.ArrayList.forEach(ArrayList.java:1249) at org.apache.nifi.controller.StandardFlowService.offload(StandardFlowService.java:706) at org.apache.nifi.controller.StandardFlowService.handleOffloadRequest(StandardFlowService.java:688) at org.apache.nifi.controller.StandardFlowService.access$400(StandardFlowService.java:105) at org.apache.nifi.controller.StandardFlowService$3.run(StandardFlowService.java:428) at java.lang.Thread.run(Thread.java:745) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
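A sketch of the guard implied by the stack trace; the RemoteProcessGroup type and methods follow the trace above, while the surrounding loop and the remoteGroups collection are illustrative rather than the actual StandardFlowService.offload() code.
{code:java}
// Sketch: only stop transmission on groups that are actually transmitting, so offload
// does not fail on RPGs that are already disabled. remoteGroups stands in for the
// collection the offload routine iterates.
for (final RemoteProcessGroup rpg : remoteGroups) {
    if (rpg.isTransmitting()) { // skip RPGs that are already disabled
        rpg.stopTransmitting();
    }
}
{code}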
[jira] [Commented] (NIFI-5913) Standardize Definition of UUID in Documentation
[ https://issues.apache.org/jira/browse/NIFI-5913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16726810#comment-16726810 ] Peter Wicks commented on NIFI-5913: --- I merged with the GitHub button on a pr yesterday, and just made sure to type in my sign off by hand. > Standardize Definition of UUID in Documentation > --- > > Key: NIFI-5913 > URL: https://issues.apache.org/jira/browse/NIFI-5913 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation & Website >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Trivial > Fix For: 1.9.0 > > Time Spent: 20m > Remaining Estimate: 0h > > UUID does not have a consistent definition in the documentation, and one > definition had a repeated word (unique). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-5871) MockProcessSession.putAllAttributes should ignore the UUID attribute
[ https://issues.apache.org/jira/browse/NIFI-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-5871. --- Resolution: Fixed > MockProcessSession.putAllAttributes should ignore the UUID attribute > > > Key: NIFI-5871 > URL: https://issues.apache.org/jira/browse/NIFI-5871 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.8.0 >Reporter: Alex Savitsky >Priority: Major > Time Spent: 50m > Remaining Estimate: 0h > > Currently, the method would copy all attributes indiscriminately, but the > interface Javadoc specifically states that the attribute "uuid" should be > ignored. This leads to issues with testing, where two distinct flow files are > considered the same in the session. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5871) MockProcessSession.putAllAttributes should ignore the UUID attribute
[ https://issues.apache.org/jira/browse/NIFI-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks updated NIFI-5871: -- Fix Version/s: 1.9.0 > MockProcessSession.putAllAttributes should ignore the UUID attribute > > > Key: NIFI-5871 > URL: https://issues.apache.org/jira/browse/NIFI-5871 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.8.0 >Reporter: Alex Savitsky >Priority: Major > Fix For: 1.9.0 > > Time Spent: 50m > Remaining Estimate: 0h > > Currently, the method would copy all attributes indiscriminately, but the > interface Javadoc specifically states that the attribute "uuid" should be > ignored. This leads to issues with testing, where two distinct flow files are > considered the same in the session. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5913) Standardize Definition of UUID in Documentation
Peter Wicks created NIFI-5913: - Summary: Standardize Definition of UUID in Documentation Key: NIFI-5913 URL: https://issues.apache.org/jira/browse/NIFI-5913 Project: Apache NiFi Issue Type: Improvement Components: Documentation & Website Reporter: Peter Wicks Assignee: Peter Wicks UUID does not have a consistent definition in the documentation, and one definition had a repeated word (unique). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5897) GetFile and FetchFile Min Age May Filter Out Files Unexpectedly
Peter Wicks created NIFI-5897: - Summary: GetFile and FetchFile Min Age May Filter Out Files Unexpectedly Key: NIFI-5897 URL: https://issues.apache.org/jira/browse/NIFI-5897 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Peter Wicks Assignee: Peter Wicks In GetFile and FetchFile processors, the Minimum File Age property is not optional, and defaults to 0 sec. This is intended to mean that no filter should be applied, I believe. If a file is uploaded with a timestamp that is in the future compared to NiFi's system timestamp (ex. NiFi is running in UTC, but has a filer mounted where files are uploaded in UTC+8), then the age of the file will be -8 hrs, and will not be loaded. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
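A small illustration of the arithmetic in plain Java (not the processor's code): with the default Minimum File Age of 0 sec, a file whose timestamp is ahead of NiFi's clock has a negative age and fails the age check.
{code:java}
// Demonstrates why a future lastModified timestamp slips past the "age >= minAge" filter.
public class MinAgeExample {
    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long lastModified = now + 8L * 60 * 60 * 1000; // file stamped 8 hours in the future
        long minAgeMillis = 0;                          // default "0 sec"

        long fileAge = now - lastModified;              // roughly -8 hours
        boolean passesMinAge = fileAge >= minAgeMillis; // false, so the file is skipped
        System.out.println("age=" + fileAge + " ms, passesMinAge=" + passesMinAge);
    }
}
{code}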
[jira] [Commented] (NIFI-5639) PublishAMQP Processor
[ https://issues.apache.org/jira/browse/NIFI-5639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710666#comment-16710666 ] Peter Wicks commented on NIFI-5639: --- [~kislayom] - Probably the most important thing is that you have a use for it or can easily test it. Personally, I have never used this processor before > PublishAMQP Processor > --- > > Key: NIFI-5639 > URL: https://issues.apache.org/jira/browse/NIFI-5639 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.7.1 >Reporter: Mohammed Nadeem >Priority: Critical > Original Estimate: 96h > Remaining Estimate: 96h > > *PublishAMQP* Processor is routing incoming flowfile to success relationship > though invalid properties are configured. The processor is not throwing any > errors when invalid properties are configured at run-time. When the message > is not published with invalid properties, the processor silently routes the > message to success when it is suppose to route to failure relationship > indicating message was not published to the queue. This is a bug and no > proper error handling when failures occur *(nifi-amqp-bundle)* -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4621) Allow inputs to ListSFTP and ListFTP
[ https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710366#comment-16710366 ] Peter Wicks commented on NIFI-4621: --- Hi [~kislayom], Your PR does not look correct to me. Did you check in all of the commits? Maybe something got lost? Right now I don't see most of the changes I expected to see (EL changes you mentioned before are missing, etc...) > Allow inputs to ListSFTP and ListFTP > > > Key: NIFI-4621 > URL: https://issues.apache.org/jira/browse/NIFI-4621 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.4.0 >Reporter: Soumya Shanta Ghosh >Assignee: Peter Wicks >Priority: Critical > Fix For: 1.9.0 > > > ListSFTP supports listing of the supplied directory (Remote Path) > out-of-the-box on the supplied "Hostname" using the 'Username" and 'Password" > / "Private Key Passphrase". > The password can change at a regular interval (depending on organization > policy) or the Hostname or the Remote Path can change based on some other > requirement. > This is a case to allow ListSFTP to leverage the use of Nifi Expression > language so that the values of Hostname, Password and/or Remote Path can be > set based on the attributes of an incoming flow file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5862) MockRecordParser Has Bad Logic for failAfterN
Peter Wicks created NIFI-5862: - Summary: MockRecordParser Has Bad Logic for failAfterN Key: NIFI-5862 URL: https://issues.apache.org/jira/browse/NIFI-5862 Project: Apache NiFi Issue Type: Bug Reporter: Peter Wicks Assignee: Peter Wicks `MockRecordParser` has a function that allows it to throw an exception after a certain number of records have been read. This feature is not working at all, and instead the reader fails immediately without reading any records. None of the test cases check for how many records were read, so you can only see this in the console, for example on `TestSplitRecord.testReadFailure`: As Is: `Intentional Unit Test Exception because 0 records have been read` As Should Be: `Intentional Unit Test Exception because 2 records have been read` -- This message was sent by Atlassian JIRA (v7.6.3#76005)
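A sketch of the intended failAfterN behavior; the field and helper names are illustrative, not the actual MockRecordParser members (Record is org.apache.nifi.serialization.record.Record).
{code:java}
// Sketch: throw only after N records have actually been returned, instead of immediately.
private int recordsRead = 0;
private int failAfterN = -1; // -1 means "never fail"

private Record nextRecord() throws IOException {
    if (failAfterN >= 0 && recordsRead >= failAfterN) {
        throw new IOException("Intentional Unit Test Exception because " + recordsRead
                + " records have been read");
    }
    recordsRead++;
    return pollQueuedRecord(); // hypothetical helper returning the next queued record
}
{code}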
[jira] [Resolved] (NIFI-764) Add support for global variables
[ https://issues.apache.org/jira/browse/NIFI-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Wicks resolved NIFI-764. -- Resolution: Duplicate Already implemented with other tickets. > Add support for global variables > > > Key: NIFI-764 > URL: https://issues.apache.org/jira/browse/NIFI-764 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, Core UI >Reporter: Matt Gilman >Priority: Minor > > Add support for global variables (possibly scoped at a group level). These > could be used within NiFi expression language providing support for more > portable templates and bulk configuration changes. > The general idea is that the user will be able to define a variable name and > value and then use it within expression language. At this point they would be > able to export a template with components referencing the variables. When > imported into another NiFi instance with the same variables defined, it > provides an easy way to move a flow between different environments and update > its configuration in a single action (like when values for development and > production differ per instance). > Another benefit of an approach like this is that it could allow for bulk > edits. If multiple components reference the same variable (e.g. > .post.uri), it allows the user to make that change in a single location > rather than having to identify each instance of that value. Will obviously > need to consider the mechanics when modifying a variable as it will require > all referencing components to be stopped. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4621) Allow inputs to ListSFTP
[ https://issues.apache.org/jira/browse/NIFI-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16703310#comment-16703310 ] Peter Wicks commented on NIFI-4621: --- [~kislayom] How's this going? > Allow inputs to ListSFTP > > > Key: NIFI-4621 > URL: https://issues.apache.org/jira/browse/NIFI-4621 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.4.0 >Reporter: Soumya Shanta Ghosh >Assignee: Kislay Kumar >Priority: Critical > > ListSFTP supports listing of the supplied directory (Remote Path) > out-of-the-box on the supplied "Hostname" using the 'Username" and 'Password" > / "Private Key Passphrase". > The password can change at a regular interval (depending on organization > policy) or the Hostname or the Remote Path can change based on some other > requirement. > This is a case to allow ListSFTP to leverage the use of Nifi Expression > language so that the values of Hostname, Password and/or Remote Path can be > set based on the attributes of an incoming flow file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5852) ExecuteSQL Pre/Post SQL Don't Allow Semicolons in String Literals
Peter Wicks created NIFI-5852: - Summary: ExecuteSQL Pre/Post SQL Don't Allow Semicolons in String Literals Key: NIFI-5852 URL: https://issues.apache.org/jira/browse/NIFI-5852 Project: Apache NiFi Issue Type: Improvement Reporter: Peter Wicks In NIFI-5780 pre/post SQL statements were added to ExecuteSQL. If the SQL statement contains a string constant, like `WHERE field='some;value'` then the code breaks because it splits on semicolons no matter where they appear. Some form of smarter string parsing would allow for semicolons to appear inside of strings. [~deonashh] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
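A sketch of quote-aware splitting (not the actual NiFi code): semicolons inside single-quoted literals are preserved, and a doubled '' behaves like an escaped quote because each quote simply toggles the literal flag.
{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative splitter: break SQL on semicolons, but keep semicolons that appear
// inside single-quoted string literals.
public class SqlSplitter {
    public static List<String> splitStatements(final String sql) {
        final List<String> statements = new ArrayList<>();
        final StringBuilder current = new StringBuilder();
        boolean inLiteral = false;
        for (int i = 0; i < sql.length(); i++) {
            final char c = sql.charAt(i);
            if (c == '\'') {
                inLiteral = !inLiteral;
                current.append(c);
            } else if (c == ';' && !inLiteral) {
                if (!current.toString().trim().isEmpty()) {
                    statements.add(current.toString().trim());
                }
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        if (!current.toString().trim().isEmpty()) {
            statements.add(current.toString().trim());
        }
        return statements;
    }
}
{code}
For example, splitStatements("UPDATE t SET f='some;value'; DELETE FROM t") yields two statements instead of three.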