[jira] [Created] (NIFI-2780) Have NiFi automatically clean up the persistent provenance data after switching
David A. Wynne created NIFI-2780: Summary: Have NiFi automatically clean up the persistent provenance data after switching Key: NIFI-2780 URL: https://issues.apache.org/jira/browse/NIFI-2780 Project: Apache NiFi Issue Type: Wish Reporter: David A. Wynne Priority: Trivial Current behavior of NiFi requires the user to manually clean up provenance repository data left on disk after switching from persistent to volatile. This ticket is a request to have NiFi automatically clean up the persistent provenance data left on disk after switching. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (NIFI-2778) If a Provenance Query is canceled, the repository doesn't stop immediately
Mark Payne created NIFI-2778: Summary: If a Provenance Query is canceled, the repository doesn't stop immediately Key: NIFI-2778 URL: https://issues.apache.org/jira/browse/NIFI-2778 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Mark Payne Fix For: 1.1.0 When a Provenance Query is issued and then canceled, the result object is marked as canceled, but the repository continues to search. It should instead stop querying Lucene and stop reading events from the provenance log files.
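The desired behavior in NIFI-2778 — stop reading as soon as the result is marked canceled — follows a common pattern: check a cancellation flag between batches of work. A minimal sketch in plain Python (not NiFi's actual repository code; the names here are hypothetical):

```python
import threading

class QueryResult:
    """Hypothetical stand-in for a provenance query result that can be canceled."""
    def __init__(self):
        self._canceled = threading.Event()

    def cancel(self):
        self._canceled.set()

    @property
    def canceled(self):
        return self._canceled.is_set()

def search_events(event_batches, result):
    """Scan event batches, checking the cancel flag between batches so a
    canceled query stops promptly instead of reading every log file."""
    matches = []
    for batch in event_batches:
        if result.canceled:  # stop as soon as cancellation is observed
            break
        matches.extend(e for e in batch if e.get("type") == "RECEIVE")
    return matches
```

With this structure, a query canceled from another thread stops at the next batch boundary rather than running to completion.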
[jira] [Created] (NIFI-2777) Provenance Events' Node Identifier not set when querying only 1 node in cluster
Mark Payne created NIFI-2777: Summary: Provenance Events' Node Identifier not set when querying only 1 node in cluster Key: NIFI-2777 URL: https://issues.apache.org/jira/browse/NIFI-2777 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Mark Payne Assignee: Mark Payne Fix For: 1.1.0 If I open the Provenance page and search for a FlowFile UUID and restrict the search to a specific node, the Node Identifier is not populated in the events that are returned. As a result, I cannot view the lineage.
[jira] [Resolved] (NIFI-2754) FlowFiles Queue into Swap Only
[ https://issues.apache.org/jira/browse/NIFI-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman resolved NIFI-2754. --- Resolution: Fixed Fix Version/s: 1.1.0 > FlowFiles Queue into Swap Only > -- > > Key: NIFI-2754 > URL: https://issues.apache.org/jira/browse/NIFI-2754 > Project: Apache NiFi > Issue Type: Bug >Reporter: Peter Wicks > Fix For: 1.1.0 > > > If the Active queue is empty and the number of FlowFiles added to the queue > is evenly divisible by the current Swap size (10 Flow Files / 2 > files per swap file = 5 with no remainder), then no FlowFiles will move to > Active and all will remain in Swap.
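The arithmetic in the report can be made concrete with a toy model (illustrative only, not NiFi's actual FlowFileQueue implementation): if whole chunks are swapped out before anything is kept active, an exact multiple of the swap size leaves the active queue empty, which is what the fix ("migrating swap to active prior to swapping") addresses.

```python
def distribute_buggy(n_flowfiles, swap_size):
    """Old behavior: swap out every complete chunk first. When n_flowfiles is
    an exact multiple of swap_size and the active queue starts empty,
    everything lands in swap and nothing is pollable."""
    swapped = (n_flowfiles // swap_size) * swap_size
    return n_flowfiles - swapped, swapped  # (active, swapped)

def distribute_fixed(n_flowfiles, swap_size):
    """Fixed behavior: migrate up to one swap-file's worth into the active
    queue first, then swap out only the remainder."""
    active = min(n_flowfiles, swap_size)
    return active, n_flowfiles - active
```

With the numbers from the report (10 FlowFiles, 2 per swap file), the buggy path yields (0, 10) — all swapped, none active — while the fixed path yields (2, 8).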
[jira] [Commented] (NIFI-2754) FlowFiles Queue into Swap Only
[ https://issues.apache.org/jira/browse/NIFI-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491312#comment-15491312 ] ASF GitHub Bot commented on NIFI-2754: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/1000 Thanks @patricker! Following Mark's suggestion and your update, I've gone ahead and merged this into master. > FlowFiles Queue into Swap Only > -- > > Key: NIFI-2754 > URL: https://issues.apache.org/jira/browse/NIFI-2754 > Project: Apache NiFi > Issue Type: Bug >Reporter: Peter Wicks > > > If the Active queue is empty and the number of FlowFiles added to the queue > is evenly divisible by the current Swap size (10 Flow Files / 2 > files per swap file = 5 with no remainder), then no FlowFiles will move to > Active and all will remain in Swap.
[GitHub] nifi issue #1000: NIFI-2754
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/1000 Thanks @patricker! Following Mark's suggestion and your update, I've gone ahead and merged this into master. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] nifi pull request #1000: NIFI-2754
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1000
[jira] [Commented] (NIFI-2754) FlowFiles Queue into Swap Only
[ https://issues.apache.org/jira/browse/NIFI-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491308#comment-15491308 ] ASF subversion and git services commented on NIFI-2754: --- Commit 8a28395e9feafdb3af8c76137bfe0f5f7a07e27e in nifi's branch refs/heads/master from [~patricker] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=8a28395 ] NIFI-2754 - Migrating swap to active prior to swapping if necessary. - This closes #1000. > FlowFiles Queue into Swap Only > -- > > Key: NIFI-2754 > URL: https://issues.apache.org/jira/browse/NIFI-2754 > Project: Apache NiFi > Issue Type: Bug >Reporter: Peter Wicks > > > If the Active queue is empty and the number of FlowFiles added to the queue > is evenly divisible by the current Swap size (10 Flow Files / 2 > files per swap file = 5 with no remainder), then no FlowFiles will move to > Active and all will remain in Swap.
[jira] [Commented] (NIFI-2754) FlowFiles Queue into Swap Only
[ https://issues.apache.org/jira/browse/NIFI-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491309#comment-15491309 ] ASF GitHub Bot commented on NIFI-2754: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1000 > FlowFiles Queue into Swap Only > -- > > Key: NIFI-2754 > URL: https://issues.apache.org/jira/browse/NIFI-2754 > Project: Apache NiFi > Issue Type: Bug >Reporter: Peter Wicks > > > If the Active queue is empty and the number of FlowFiles added to the queue > is evenly divisible by the current Swap size (10 Flow Files / 2 > files per swap file = 5 with no remainder), then no FlowFiles will move to > Active and all will remain in Swap.
[jira] [Commented] (NIFI-2251) Restore lineage graph export
[ https://issues.apache.org/jira/browse/NIFI-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491207#comment-15491207 ] ASF GitHub Bot commented on NIFI-2251: -- Github user YolandaMDavis commented on the issue: https://github.com/apache/nifi/pull/982 Thanks @mcgilman! > Restore lineage graph export > > > Key: NIFI-2251 > URL: https://issues.apache.org/jira/browse/NIFI-2251 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Reporter: Matt Gilman >Assignee: Yolanda M. Davis > Fix For: 1.1.0 > > > Restore the lineage graph download/export using client side > methods/technologies to prevent unnecessary trips to the server.
[GitHub] nifi issue #982: NIFI-2251 - Initial commit for client side provenance linea...
Github user YolandaMDavis commented on the issue: https://github.com/apache/nifi/pull/982 Thanks @mcgilman!
[jira] [Commented] (NIFI-2776) When a node connects back to the cluster, intermittently a processor does not return to the same state as it is in the cluster
[ https://issues.apache.org/jira/browse/NIFI-2776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491154#comment-15491154 ] Arpit Gupta commented on NIFI-2776: --- Around this time we see the following in the nifi-app.log
{code}
2016-09-14 06:24:34,346 INFO [StandardProcessScheduler Thread-1] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled GetFile[id=275e48e3-0157-1000--0ba0cd25] to run with 1 threads
2016-09-14 06:24:34,559 INFO [StandardProcessScheduler Thread-2] org.elasticsearch.plugins [Titania] loaded [], sites []
2016-09-14 06:24:34,714 INFO [Process Cluster Protocol Request-6] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 1858ae64-a654-41ae-bcc8-6e10dd5fdaad (type=DISCONNECTION_REQUEST, length=596 bytes) from host:port in 10 millis
2016-09-14 06:24:34,718 INFO [Disconnect from Cluster] o.a.nifi.controller.StandardFlowService Received disconnection request message from manager with explanation: User anonymous requested that node be disconnected from cluster
2016-09-14 06:24:34,718 INFO [Disconnect from Cluster] o.a.nifi.controller.StandardFlowService Disconnecting node.
2016-09-14 06:24:34,719 INFO [Disconnect from Cluster] o.apache.nifi.controller.FlowController Cluster State changed from Clustered to Not Clustered
2016-09-14 06:24:34,725 INFO [Disconnect from Cluster] o.a.n.c.l.e.CuratorLeaderElectionManager This node is no longer registered to be elected as the Leader for Role 'Primary Node'
2016-09-14 06:24:34,732 INFO [Disconnect from Cluster] o.a.n.c.l.e.CuratorLeaderElectionManager This node is no longer registered to be elected as the Leader for Role 'Cluster Coordinator'
2016-09-14 06:24:34,733 INFO [Disconnect from Cluster] o.a.nifi.controller.StandardFlowService Node disconnected.
2016-09-14 06:24:34,747 ERROR [Leader Election Notification Thread-2] o.a.c.f.recipes.leader.LeaderSelector The leader threw an exception
java.lang.IllegalMonitorStateException: You do not own the lock: /nifi/leaders/Cluster Coordinator
at org.apache.curator.framework.recipes.locks.InterProcessMutex.release(InterProcessMutex.java:140) ~[curator-recipes-2.11.0.jar:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.doWork(LeaderSelector.java:425) [curator-recipes-2.11.0.jar:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.doWorkLoop(LeaderSelector.java:441) [curator-recipes-2.11.0.jar:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.access$100(LeaderSelector.java:64) [curator-recipes-2.11.0.jar:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:245) [curator-recipes-2.11.0.jar:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:239) [curator-recipes-2.11.0.jar:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
2016-09-14 06:24:34,747 ERROR [Leader Election Notification Thread-1] o.a.c.f.recipes.leader.LeaderSelector The leader threw an exception
java.lang.IllegalMonitorStateException: You do not own the lock: /nifi/leaders/Primary Node
at org.apache.curator.framework.recipes.locks.InterProcessMutex.release(InterProcessMutex.java:140) ~[curator-recipes-2.11.0.jar:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.doWork(LeaderSelector.java:425) [curator-recipes-2.11.0.jar:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.doWorkLoop(LeaderSelector.java:441) [curator-recipes-2.11.0.jar:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector.access$100(LeaderSelector.java:64) [curator-recipes-2.11.0.jar:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:245) [curator-recipes-2.11.0.jar:na]
at org.apache.curator.framework.recipes.leader.LeaderSelector$2.call(LeaderSelector.java:239) [curator-recipes-2.11.0.jar:na]
at
[jira] [Created] (NIFI-2776) When a node connects back to the cluster, intermittently a processor does not return to the same state as it is in the cluster
Arpit Gupta created NIFI-2776: - Summary: When a node connects back to the cluster, intermittently a processor does not return to the same state as it is in the cluster Key: NIFI-2776 URL: https://issues.apache.org/jira/browse/NIFI-2776 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.1.0 Reporter: Arpit Gupta Fix For: 1.1.0 Here is the scenario:
1. Create a flow and start a processor
2. Disconnect a node
3. On the disconnected node, stop the above processor
4. Connect the above node to the cluster
5. Wait 30s.
6. Check if the processor started on the node that was connected in #4.
Very intermittently we see that the processor does not get into the running state. When we query the processor status on the node we get the following bulletin {code} "bulletins": [{ "id": 0, "groupId": "275e45f8-0157-1000--f191c079", "sourceId": "275e4abc-0157-1000--5740dd0c", "timestamp": "06:24:35 UTC", "nodeAddress": "host:port", "canRead": true, "bulletin": { "id": 0, "nodeAddress": "host:port", "category": "Log Message", "groupId": "275e45f8-0157-1000--f191c079", "sourceId": "275e4abc-0157-1000--5740dd0c", "sourceName": "putES", "level": "WARNING", "message": "PutElasticsearch[id=275e4abc-0157-1000--5740dd0c] Can not start 'PutElasticsearch' since it's already in the process of being started or it is DISABLED - STOPPING", "timestamp": "06:24:35 UTC" } }], {code}
[jira] [Updated] (NIFI-2251) Restore lineage graph export
[ https://issues.apache.org/jira/browse/NIFI-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-2251: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Restore lineage graph export > > > Key: NIFI-2251 > URL: https://issues.apache.org/jira/browse/NIFI-2251 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Reporter: Matt Gilman >Assignee: Yolanda M. Davis > Fix For: 1.1.0 > > > Restore the lineage graph download/export using client side > methods/technologies to prevent unnecessary trips to the server.
[jira] [Commented] (NIFI-2251) Restore lineage graph export
[ https://issues.apache.org/jira/browse/NIFI-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491128#comment-15491128 ] ASF GitHub Bot commented on NIFI-2251: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/982 Thanks @YolandaMDavis! This looks good. I've verified the changes in all of our supported browsers. Just a heads up, I've made a couple minor changes to some spacing/formatting to be more consistent with existing code. This has been merged to master. > Restore lineage graph export > > > Key: NIFI-2251 > URL: https://issues.apache.org/jira/browse/NIFI-2251 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Reporter: Matt Gilman >Assignee: Yolanda M. Davis > Fix For: 1.1.0 > > > Restore the lineage graph download/export using client side > methods/technologies to prevent unnecessary trips to the server.
[GitHub] nifi issue #982: NIFI-2251 - Initial commit for client side provenance linea...
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/982 Thanks @YolandaMDavis! This looks good. I've verified the changes in all of our supported browsers. Just a heads up, I've made a couple minor changes to some spacing/formatting to be more consistent with existing code. This has been merged to master.
[jira] [Commented] (NIFI-2251) Restore lineage graph export
[ https://issues.apache.org/jira/browse/NIFI-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491125#comment-15491125 ] ASF GitHub Bot commented on NIFI-2251: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/982 > Restore lineage graph export > > > Key: NIFI-2251 > URL: https://issues.apache.org/jira/browse/NIFI-2251 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Reporter: Matt Gilman >Assignee: Yolanda M. Davis > Fix For: 1.1.0 > > > Restore the lineage graph download/export using client side > methods/technologies to prevent unnecessary trips to the server.
[GitHub] nifi pull request #982: NIFI-2251 - Initial commit for client side provenanc...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/982
[jira] [Commented] (NIFI-2251) Restore lineage graph export
[ https://issues.apache.org/jira/browse/NIFI-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491122#comment-15491122 ] ASF subversion and git services commented on NIFI-2251: --- Commit 67a47dbead2ea4e06c637bc50c64fbdc2c66a546 in nifi's branch refs/heads/master from [~YolandaMDavis] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=67a47db ] NIFI-2251 - Initial commit for client side provenance lineage svg download. - css styling adjustments, changes for svg replace - Addressing some style/spacing. - This closes #982. > Restore lineage graph export > > > Key: NIFI-2251 > URL: https://issues.apache.org/jira/browse/NIFI-2251 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Reporter: Matt Gilman >Assignee: Yolanda M. Davis > Fix For: 1.1.0 > > > Restore the lineage graph download/export using client side > methods/technologies to prevent unnecessary trips to the server.
[jira] [Created] (NIFI-2775) UI - Go To button in Provenance table is not visible in Firefox
Matt Gilman created NIFI-2775: - Summary: UI - Go To button in Provenance table is not visible in Firefox Key: NIFI-2775 URL: https://issues.apache.org/jira/browse/NIFI-2775 Project: Apache NiFi Issue Type: Bug Components: Core UI Reporter: Matt Gilman Fix For: 1.1.0 In the Provenance table, the Go To button appears to be wrapping to the next line, making the button not visible. Should also verify other buttons in other tables. This is only happening in Firefox.
[jira] [Updated] (NIFI-2765) PutHiveStreaming does not work with Kerberos
[ https://issues.apache.org/jira/browse/NIFI-2765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-2765: --- Fix Version/s: 1.1.0 > PutHiveStreaming does not work with Kerberos > > > Key: NIFI-2765 > URL: https://issues.apache.org/jira/browse/NIFI-2765 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.0.0 >Reporter: Matt Burgess >Assignee: Matt Burgess > Fix For: 1.1.0 > > > The PutHiveStreaming processor will complain about a missing > nifi.kerberos.krb5.file setting if using Kerberos. > It is the same symptom described in NIFI-2598, which fixed the issue for > HiveConnectionPool (and thus SelectHiveQL and PutHiveQL), but > PutHiveStreaming does not use the HiveConnectionPool, yet has similar code > which is statically adding Kerberos properties.
[jira] [Updated] (NIFI-1342) PostHTTP User Agent property should be pre-populated with client default
[ https://issues.apache.org/jira/browse/NIFI-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-1342: - Status: Patch Available (was: Open) > PostHTTP User Agent property should be pre-populated with client default > > > Key: NIFI-1342 > URL: https://issues.apache.org/jira/browse/NIFI-1342 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.4.1 >Reporter: Aldrin Piri >Priority: Trivial > > Currently, PostHTTP shows an empty string for the User Agent property which > is used in web requests, but this actually results in the default of the > client being used. For clarity, and if the backing library supports it, > getting the User Agent string used by the library as the default processor > User Agent property would be a nice improvement.
[jira] [Commented] (NIFI-1342) PostHTTP User Agent property should be pre-populated with client default
[ https://issues.apache.org/jira/browse/NIFI-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490793#comment-15490793 ] ASF GitHub Bot commented on NIFI-1342: -- GitHub user pvillard31 opened a pull request: https://github.com/apache/nifi/pull/1021 NIFI-1342 Added default User-Agent in PostHttp You can merge this pull request into a Git repository by running: $ git pull https://github.com/pvillard31/nifi nifi1342 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1021.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1021 commit ab155a39d9d4587bdf68eb3ce1e0c886bb8f925f Author: Pierre Villard Date: 2016-09-14T15:59:17Z NIFI-1342 Added default User-Agent in PostHttp > PostHTTP User Agent property should be pre-populated with client default > > > Key: NIFI-1342 > URL: https://issues.apache.org/jira/browse/NIFI-1342 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 0.4.1 >Reporter: Aldrin Piri >Priority: Trivial > > Currently, PostHTTP shows an empty string for the User Agent property which > is used in web requests, but this actually results in the default of the > client being used. For clarity, and if the backing library supports it, > getting the User Agent string used by the library as the default processor > User Agent property would be a nice improvement.
[jira] [Comment Edited] (NIFI-2774) ConsumeJMS processor loses messages on NiFi restart
[ https://issues.apache.org/jira/browse/NIFI-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490702#comment-15490702 ] Christopher McDermott edited comment on NIFI-2774 at 9/14/16 3:59 PM: -- Digging into this a little further, it looks like just adding the client acknowledgment mode is not enough. The code needs to explicitly acknowledge the message after it's been added to the flow if CLIENT_ACKNOWLEDGE is being used. As far as I can tell it does not do that today. Given that, I am not sure that exposing the ACK mode is even a good idea. It's probably better to just always use CLIENT_ACKNOWLEDGE, rather than having a "please expose me to data-loss" setting. was (Author: ch...@mcdermott.net): Digging into this a little further, I looks like just adding the client acknowledgment mode is not enough. The code needs to explicitly acknowledge the message after its been added to the flow if CLIENT_ACKNOWLEDGE is being used. As far as I can tell it does not do that today. Given that I am not sure that exposing the ACK mode is event a good idea. Its probably better to just always use CLIENT_ACKNOWLEDGE, rather than having a "please expose me to data-loss" setting. > ConsumeJMS processor loses messages on NiFi restart > > > Key: NIFI-2774 > URL: https://issues.apache.org/jira/browse/NIFI-2774 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0, 0.7.0, 1.1.0, 0.8.0 >Reporter: Christopher McDermott >Assignee: Oleg Zhurakousky >Priority: Critical > Fix For: 1.1.0, 0.8.0 > > > ConsumeJMS processor uses auto-acknowledge mode. Unlike the deprecated > GetJMSQueue processor it does not provide a way to specify a different ACK > mode (i.e. client-acknowledge.) Using auto-acknowledge acknowledges message > receipt from JMS *before* the messages are actually added to the flow. This > leads to data-loss on NiFi stop (or crash.)
> I believe the fix for this is to allow the user to specify the ACK mode in > the processor configuration like is allowed by the GetJMSQueue processor.
[GitHub] nifi pull request #1021: NIFI-1342 Added default User-Agent in PostHttp
GitHub user pvillard31 opened a pull request: https://github.com/apache/nifi/pull/1021 NIFI-1342 Added default User-Agent in PostHttp You can merge this pull request into a Git repository by running: $ git pull https://github.com/pvillard31/nifi nifi1342 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1021.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1021 commit ab155a39d9d4587bdf68eb3ce1e0c886bb8f925f Author: Pierre Villard Date: 2016-09-14T15:59:17Z NIFI-1342 Added default User-Agent in PostHttp
[jira] [Assigned] (NIFI-2774) ConsumeJMS processor loses messages on NiFi restart
[ https://issues.apache.org/jira/browse/NIFI-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleg Zhurakousky reassigned NIFI-2774: -- Assignee: Oleg Zhurakousky > ConsumeJMS processor loses messages on NiFi restart > > > Key: NIFI-2774 > URL: https://issues.apache.org/jira/browse/NIFI-2774 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0, 0.7.0, 1.1.0, 0.8.0 >Reporter: Christopher McDermott >Assignee: Oleg Zhurakousky >Priority: Critical > Fix For: 1.1.0, 0.8.0 > > > ConsumeJMS processor uses auto-acknowledge mode. Unlike the deprecated > GetJMSQueue processor it does not provide a way to specify a different ACK > mode (i.e. client-acknowledge.) Using auto-acknowledge acknowledges message > receipt from JMS *before* the messages are actually added to the flow. This > leads to data-loss on NiFi stop (or crash.) > I believe the fix for this is to allow the user to specify the ACK mode in > the processor configuration like is allowed by the GetJMSQueue processor.
[jira] [Comment Edited] (NIFI-2774) ConsumeJMS processor loses messages on NiFi restart
[ https://issues.apache.org/jira/browse/NIFI-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490776#comment-15490776 ] Oleg Zhurakousky edited comment on NIFI-2774 at 9/14/16 3:54 PM: - [~ch...@mcdermott.net] it will actually be handled with local TX where there will be explicit commits and rollbacks. More reliable and easier to follow. The fix is already in place, just adding tests to validate the behavior. I will also add more details to this JIRA once completed. Thank you for pointing this out! was (Author: ozhurakousky): [~ch...@mcdermott.net] it will actually be handled with local TX where there will be explicit commits and rollbacks. More reliable and easier to follow. The fix is already in place, just adding tests to validate the behavior Thank you for pointing this out! > ConsumeJMS processor loses messages on NiFi restart > > > Key: NIFI-2774 > URL: https://issues.apache.org/jira/browse/NIFI-2774 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0, 0.7.0, 1.1.0, 0.8.0 >Reporter: Christopher McDermott >Priority: Critical > Fix For: 1.1.0, 0.8.0 > > > ConsumeJMS processor uses auto-acknowledge mode. Unlike the deprecated > GetJMSQueue processor it does not provide a way to specify a different ACK > mode (i.e. client-acknowledge.) Using auto-acknowledge acknowledges message > receipt from JMS *before* the messages are actually added to the flow. This > leads to data-loss on NiFi stop (or crash.) > I believe the fix for this is to allow the user to specify the ACK mode in > the processor configuration like is allowed by the GetJMSQueue processor.
[jira] [Commented] (NIFI-2774) ConsumeJMS processor loses messages on NiFi restart
[ https://issues.apache.org/jira/browse/NIFI-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490776#comment-15490776 ] Oleg Zhurakousky commented on NIFI-2774: [~ch...@mcdermott.net] it will actually be handled with local TX where there will be explicit commits and rollbacks. More reliable and easier to follow. The fix is already in place, just adding tests to validate the behavior. Thank you for pointing this out! > ConsumeJMS processor loses messages on NiFi restart > > > Key: NIFI-2774 > URL: https://issues.apache.org/jira/browse/NIFI-2774 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0, 0.7.0, 1.1.0, 0.8.0 >Reporter: Christopher McDermott >Priority: Critical > Fix For: 1.1.0, 0.8.0 > > > ConsumeJMS processor uses auto-acknowledge mode. Unlike the deprecated > GetJMSQueue processor it does not provide a way to specify a different ACK > mode (i.e. client-acknowledge.) Using auto-acknowledge acknowledges message > receipt from JMS *before* the messages are actually added to the flow. This > leads to data-loss on NiFi stop (or crash.) > I believe the fix for this is to allow the user to specify the ACK mode in > the processor configuration like is allowed by the GetJMSQueue processor.
[jira] [Commented] (NIFI-2774) ConsumeJMS processor loses messages on NiFi restart
[ https://issues.apache.org/jira/browse/NIFI-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490702#comment-15490702 ] Christopher McDermott commented on NIFI-2774: - Digging into this a little further, it looks like just adding the client acknowledgment mode is not enough. The code needs to explicitly acknowledge the message after it's been added to the flow if CLIENT_ACKNOWLEDGE is being used. As far as I can tell it does not do that today. Given that, I am not sure that exposing the ACK mode is even a good idea. It's probably better to just always use CLIENT_ACKNOWLEDGE, rather than having a "please expose me to data-loss" setting. > ConsumeJMS processor loses messages on NiFi restart > > > Key: NIFI-2774 > URL: https://issues.apache.org/jira/browse/NIFI-2774 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0, 0.7.0, 1.1.0, 0.8.0 >Reporter: Christopher McDermott >Priority: Critical > Fix For: 1.1.0, 0.8.0 > > > ConsumeJMS processor uses auto-acknowledge mode. Unlike the deprecated > GetJMSQueue processor it does not provide a way to specify a different ACK > mode (i.e. client-acknowledge.) Using auto-acknowledge acknowledges message > receipt from JMS *before* the messages are actually added to the flow. This > leads to data-loss on NiFi stop (or crash.) > I believe the fix for this is to allow the user to specify the ACK mode in > the processor configuration like is allowed by the GetJMSQueue processor.
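The data-loss mechanism discussed in this thread can be modeled in a few lines. This is a toy simulation, not JMS or NiFi code, and names like `Broker` are made up for illustration: under auto-acknowledge the broker discards the message at delivery time, so a crash before the message reaches the flow loses it; under client-acknowledge the message stays redeliverable until an explicit ack after the flow transfer.

```python
AUTO_ACKNOWLEDGE = "auto"
CLIENT_ACKNOWLEDGE = "client"

class Broker:
    """Toy broker: a message remains redeliverable until it is acknowledged."""
    def __init__(self, messages):
        self.pending = list(messages)

    def receive(self, ack_mode):
        msg = self.pending[0]
        if ack_mode == AUTO_ACKNOWLEDGE:
            self.pending.pop(0)   # acked on delivery, before any processing
        return msg

    def acknowledge(self, msg):
        self.pending.remove(msg)  # explicit ack, used with CLIENT_ACKNOWLEDGE

def consume(broker, ack_mode, flow, crash_before_transfer=False):
    """Simulate one consume cycle; a crash before the flow transfer shows
    which ack mode loses the message."""
    msg = broker.receive(ack_mode)
    if crash_before_transfer:
        return                    # simulate NiFi stopping mid-consume
    flow.append(msg)              # message is now safely in the flow
    if ack_mode == CLIENT_ACKNOWLEDGE:
        broker.acknowledge(msg)   # ack only after the flow has the message
```

A crash under auto-acknowledge leaves the broker with nothing to redeliver and the flow empty; the same crash under client-acknowledge leaves the message pending, so it is delivered again on restart.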
[jira] [Commented] (NIFI-1170) TailFile "File to Tail" property should support Wildcards
[ https://issues.apache.org/jira/browse/NIFI-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490689#comment-15490689 ] ASF GitHub Bot commented on NIFI-1170: -- Github user trixpan commented on a diff in the pull request: https://github.com/apache/nifi/pull/980#discussion_r78770355 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java --- @@ -117,31 +173,78 @@ .allowableValues(LOCATION_LOCAL, LOCATION_REMOTE) .defaultValue(LOCATION_LOCAL.getValue()) .build(); + static final PropertyDescriptor START_POSITION = new PropertyDescriptor.Builder() .name("Initial Start Position") -.description("When the Processor first begins to tail data, this property specifies where the Processor should begin reading data. Once data has been ingested from the file, " +.description("When the Processor first begins to tail data, this property specifies where the Processor should begin reading data. Once data has been ingested from a file, " + "the Processor will continue from the last point from which it has received data.") .allowableValues(START_BEGINNING_OF_TIME, START_CURRENT_FILE, START_CURRENT_TIME) .defaultValue(START_CURRENT_FILE.getValue()) .required(true) .build(); +static final PropertyDescriptor RECURSIVE = new PropertyDescriptor.Builder() +.name("tailfile-recursive-lookup") +.displayName("Recursive lookup") +.description("When using Multiple files mode, this property defines if files must be listed recursively or not" ++ " in the base directory.") +.allowableValues("true", "false") +.defaultValue("true") +.required(true) +.build(); + +static final PropertyDescriptor ROLLING_STRATEGY = new PropertyDescriptor.Builder() +.name("tailfile-rolling-strategy") +.displayName("Rolling Strategy") +.description("Specifies if the files to tail have a fixed name or not.") +.required(true) +.allowableValues(FIXED_NAME, CHANGING_NAME) +.defaultValue(FIXED_NAME.getValue()) 
+.build(); + +static final PropertyDescriptor LOOKUP_FREQUENCY = new PropertyDescriptor.Builder() +.name("tailfile-lookup-frequency") +.displayName("Lookup frequency") +.description("Only used in Multiple files mode and Changing name rolling strategy, it specifies the minimum " ++ "duration the processor will wait before listing again the files to tail.") +.required(false) +.addValidator(StandardValidators.TIME_PERIOD_VALIDATOR) +.defaultValue("10 minutes") --- End diff -- @joewitt I think @pvillard31 has a point when he says the status of a file should ALWAYS be tracked unless: 1. overwritten / reset by the user (causing data duplication). 2. too old to be relevant (removed automatically) Under this arrangement, the two timers make sense: 1 - Maximum age of file - if the file is older than this date it won't be tailed. (_this happens to be very similar to Heka's approach as well_) 2 - How frequently to harvest for new files - self explanatory 2b - if a new file is found, tail it. If the file is pre-existent and is older than max age, remove its status. In addition, we could consider what flume-ng taildir called an idle timeout, `idleTimeout - Time (ms) to close inactive files. If the closed file is appended new lines to, this source will automatically re-open it.` These are files that are younger than the maximum age, but largely stagnated. We would keep their status (until expiry) but they would be closed and only re-opened if the file size increased (or other tail conditions were to be triggered). flume-ng tried to deal with resource waste by using an increasing delay to poll the idle files. The higher the number of polls without new data, the longer it would take before a new retry. Not sure if this is something we would like to do, but it would also help.
Inevitably, most teams using date-based naming conventions do so to avoid truncating a file when logrotate runs, and I suspect we should simply let the user know that having too many files in the same folder matching the same regex will impact performance, and that they should compress them so they don't match the file regex, or move them to other directories, in order to minimise resource waste.
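The flume-ng style "increasing delay" for polling idle files described above could be sketched as follows; the class and method names are illustrative assumptions, not anything in TailFile or flume-ng:

```java
// Hypothetical backoff for polling idle tailed files: each poll that sees
// no new data doubles the wait (up to a cap), and any new data resets it.
public class IdlePollBackoff {
    private final long baseDelayMs;
    private final long maxDelayMs;
    private long currentDelayMs;

    public IdlePollBackoff(long baseDelayMs, long maxDelayMs) {
        this.baseDelayMs = baseDelayMs;
        this.maxDelayMs = maxDelayMs;
        this.currentDelayMs = baseDelayMs;
    }

    // Call after every poll; returns how long to wait before the next one.
    public long onPoll(boolean sawNewData) {
        if (sawNewData) {
            currentDelayMs = baseDelayMs; // active file: poll fast again
        } else {
            currentDelayMs = Math.min(currentDelayMs * 2, maxDelayMs);
        }
        return currentDelayMs;
    }
}
```

The cap matters: without `maxDelayMs`, a long-stagnant file that suddenly becomes active again could go unread for an arbitrarily long time.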
[jira] [Comment Edited] (NIFI-2774) ConsumeJMS processor losses messages on NiFi restart
[ https://issues.apache.org/jira/browse/NIFI-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490669#comment-15490669 ] Christopher McDermott edited comment on NIFI-2774 at 9/14/16 3:14 PM: -- [~joewitt], this means 0.x will not be able to provide lossless operation. Some of us are fixed to the 0.x line for some time. Yes, the GetJMSQueue processor on 0.x provides the lossless ACK functionality, but that processor has several other bugs which make it unusable. Those bugs were not fixed because the reasoning went that GetJMSQueue was being deprecated in favor of ConsumeJMS. Since the 0.x code is very close to the 1.x code in this area, it should be little extra work to pull the 1.x fix onto the 0.x branch.
[jira] [Updated] (NIFI-2771) REST API does not compress responses
[ https://issues.apache.org/jira/browse/NIFI-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-2771: -- Status: Patch Available (was: In Progress) > REST API does not compress responses > > > Key: NIFI-2771 > URL: https://issues.apache.org/jira/browse/NIFI-2771 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Mark Payne >Assignee: Matt Gilman >Priority: Critical > Fix For: 1.1.0 > > > Responses from the REST API do not appear to be compressed. In the logs, we > see warnings: > 2016-09-13 15:22:23,124 WARN [main] o.eclipse.jetty.util.DeprecationWarning > Using @Deprecated Class org.eclipse.jetty.servlets.GzipFilter > 2016-09-13 15:22:23,124 WARN [main] org.eclipse.jetty.servlets.GzipFilter > GzipFilter is deprecated. Use GzipHandler -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[GitHub] nifi issue #1019: NIFI-2772: Unsecure RAW Site-to-Site fails with User DN is...
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/1019 +1 verified site-to-site functionality for secure and un-secure, with raw and http, will merge to master --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (NIFI-1170) TailFile "File to Tail" property should support Wildcards
[ https://issues.apache.org/jira/browse/NIFI-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490598#comment-15490598 ] ASF GitHub Bot commented on NIFI-1170: -- Github user joewitt commented on a diff in the pull request: https://github.com/apache/nifi/pull/980#discussion_r78762052 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java --- @@ -117,31 +173,78 @@ .allowableValues(LOCATION_LOCAL, LOCATION_REMOTE) .defaultValue(LOCATION_LOCAL.getValue()) .build(); + static final PropertyDescriptor START_POSITION = new PropertyDescriptor.Builder() .name("Initial Start Position") -.description("When the Processor first begins to tail data, this property specifies where the Processor should begin reading data. Once data has been ingested from the file, " +.description("When the Processor first begins to tail data, this property specifies where the Processor should begin reading data. Once data has been ingested from a file, " + "the Processor will continue from the last point from which it has received data.") .allowableValues(START_BEGINNING_OF_TIME, START_CURRENT_FILE, START_CURRENT_TIME) .defaultValue(START_CURRENT_FILE.getValue()) .required(true) .build(); +static final PropertyDescriptor RECURSIVE = new PropertyDescriptor.Builder() +.name("tailfile-recursive-lookup") +.displayName("Recursive lookup") +.description("When using Multiple files mode, this property defines if files must be listed recursively or not" ++ " in the base directory.") +.allowableValues("true", "false") +.defaultValue("true") +.required(true) +.build(); + +static final PropertyDescriptor ROLLING_STRATEGY = new PropertyDescriptor.Builder() +.name("tailfile-rolling-strategy") +.displayName("Rolling Strategy") +.description("Specifies if the files to tail have a fixed name or not.") +.required(true) +.allowableValues(FIXED_NAME, CHANGING_NAME) +.defaultValue(FIXED_NAME.getValue()) 
+.build(); + +static final PropertyDescriptor LOOKUP_FREQUENCY = new PropertyDescriptor.Builder() +.name("tailfile-lookup-frequency") +.displayName("Lookup frequency") +.description("Only used in Multiple files mode and Changing name rolling strategy, it specifies the minimum " ++ "duration the processor will wait before listing again the files to tail.") +.required(false) +.addValidator(StandardValidators.TIME_PERIOD_VALIDATOR) +.defaultValue("10 minutes") --- End diff -- There appear to be two concerns here. 1) How often to look for new (not currently watched/tailed files) 2) At what point to consider a file fully consumed and no longer needing to be actively watched/tailed. There should be a property for each concern then. For (1) a rather low value on the order of seconds to minutes as a default sounds reasonable. For (2) a higher default value on the order of minutes to hours sounds reasonable. In either case, the description of the property should clearly call out what it means and the impact of the settings being too low or too high for a given situation so users can decide whether they should specify an alternative for their case or not. In no case should either of these be 'infinite' and we must ensure we limit how many things we track at once as it becomes a resource concern. If this is already accounted for then great. > TailFile "File to Tail" property should support Wildcards > - > > Key: NIFI-1170 > URL: https://issues.apache.org/jira/browse/NIFI-1170 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 0.4.0 >Reporter: Andre > > Because of challenges around log rotation of high volume syslog and app > producers, it is customary to logging platform developers to promote file > variables based file names such as DynaFiles (rsyslog), Macros(syslog-ng)as > alternatives to getting SIGHUPs being sent to the syslog daemon upon every > file rotation. 
> (To a certain extent, even NiFi has similar patterns, for example when one uses > Expression Language to set the PutHDFS destination file). > The current TailFile strategy
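The two concerns called out above (how often to look for new files, and when a file's tracked state expires) could be captured by something like the following; the class and method names are invented for illustration and are not the TailFile API:

```java
// Hypothetical sketch of the two proposed timers: one controls how often
// the directory is listed for new files, the other how long a file's
// tracked state is kept, so neither grows without bound.
public class TailFileTimers {
    private final long lookupIntervalMs; // concern 1: how often to list for new files
    private final long maxFileAgeMs;     // concern 2: when to stop tracking a file
    private long lastLookupEpochMs;

    public TailFileTimers(long lookupIntervalMs, long maxFileAgeMs) {
        this.lookupIntervalMs = lookupIntervalMs;
        this.maxFileAgeMs = maxFileAgeMs;
        this.lastLookupEpochMs = -lookupIntervalMs; // so the first call lists immediately
    }

    // True when enough time has passed to list the directory again.
    public boolean shouldLookup(long nowEpochMs) {
        if (nowEpochMs - lastLookupEpochMs >= lookupIntervalMs) {
            lastLookupEpochMs = nowEpochMs;
            return true;
        }
        return false;
    }

    // True while the file was modified recently enough to keep tailing it.
    public boolean shouldKeepTracking(long lastModifiedEpochMs, long nowEpochMs) {
        return nowEpochMs - lastModifiedEpochMs <= maxFileAgeMs;
    }
}
```

Dropping state for files past `maxFileAgeMs` is what keeps the number of tracked files bounded, addressing the resource concern raised in the review.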
[jira] [Commented] (NIFI-2772) Unsecure RAW Site-to-Site fails with User DN is not known
[ https://issues.apache.org/jira/browse/NIFI-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490596#comment-15490596 ] ASF GitHub Bot commented on NIFI-2772: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1019 > Unsecure RAW Site-to-Site fails with User DN is not known > - > > Key: NIFI-2772 > URL: https://issues.apache.org/jira/browse/NIFI-2772 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.1.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Blocker > Fix For: 1.1.0 > > > If Site-to-Site is configured unsecure (nifi.remote.input.secure=false), then > Site-to-Site client using RAW transport protocol fails with following error: > {code} > [main] ERROR org.apache.nifi.remote.client.socket.EndpointConnectionPool - > EndpointConnectionPool[Cluster URL=http://localhost:9444/nifi] failed to > communicate with Peer[url=nifi://localhost:10444,CLOSED] due to > org.apache.nifi.remote.exception.HandshakeException: Received unexpected > response User Not Authorized: User DN is not known > [main] ERROR org.apache.nifi.remote.client.socket.EndpointConnectionPool - > org.apache.nifi.remote.exception.HandshakeException: Received unexpected > response User Not Authorized: User DN is not known > at > org.apache.nifi.remote.protocol.socket.SocketClientProtocol.handshake(SocketClientProtocol.java:179) > at > org.apache.nifi.remote.protocol.socket.SocketClientProtocol.handshake(SocketClientProtocol.java:105) > at > org.apache.nifi.remote.client.socket.EndpointConnectionPool.getEndpointConnection(EndpointConnectionPool.java:240) > at > org.apache.nifi.remote.client.socket.SocketClient.createTransaction(SocketClient.java:127) > {code} > This is a regression caused by NIFI-2718. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
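For reference, the unsecure raw Site-to-Site setup that triggers this is configured in nifi.properties roughly as follows (the port is taken from the log above; the other values are illustrative):

```properties
# Site-to-Site input settings (illustrative values)
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=10444
nifi.remote.input.http.enabled=true
```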
[jira] [Updated] (NIFI-2772) Unsecure RAW Site-to-Site fails with User DN is not known
[ https://issues.apache.org/jira/browse/NIFI-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-2772: -- Resolution: Fixed Status: Resolved (was: Patch Available)
[jira] [Commented] (NIFI-2772) Unsecure RAW Site-to-Site fails with User DN is not known
[ https://issues.apache.org/jira/browse/NIFI-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490594#comment-15490594 ] ASF subversion and git services commented on NIFI-2772: --- Commit bc005e3398c2a73b8149d85fd3598dd4b5616b11 in nifi's branch refs/heads/master from [~ijokarumawak] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=bc005e3 ] NIFI-2772: Unsecure RAW Site-to-Site fails with User DN is not known. This closes #1019. Signed-off-by: Bryan Bende
[jira] [Commented] (NIFI-2772) Unsecure RAW Site-to-Site fails with User DN is not known
[ https://issues.apache.org/jira/browse/NIFI-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490590#comment-15490590 ] ASF GitHub Bot commented on NIFI-2772: -- Github user bbende commented on the issue: https://github.com/apache/nifi/pull/1019 +1 verified site-to-site functionality for secure and un-secure, with raw and http, will merge to master
[GitHub] nifi issue #1011: Handle UI race condition
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/1011 +1 tested this out by simulating a small delay during refresh and tested adding/removing out-of-order scenarios and all appear to work now, will merge to master --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (NIFI-2774) ConsumeJMS processor loses messages on NiFi restart
[ https://issues.apache.org/jira/browse/NIFI-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490569#comment-15490569 ] Joseph Witt commented on NIFI-2774: --- on 0.x line the docs should be updated to explain the behavior of the processor. on 1.x the capability to control the ack modes should be provided with the default setting being 'safe' as is reasonable to expect for any nifi processor. > ConsumeJMS processor loses messages on NiFi restart > > > Key: NIFI-2774 > URL: https://issues.apache.org/jira/browse/NIFI-2774 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0, 0.7.0, 1.1.0, 0.8.0 >Reporter: Christopher McDermott >Priority: Critical > Fix For: 1.1.0, 0.8.0 > > > ConsumeJMS processor uses auto-acknowledge mode. Unlike the deprecated > GetJMSQueue processor it does not provide a way to specify a different ACK > mode (i.e. client-acknowledge.) Using auto-acknowledge acknowledges message > receipt from JMS *before* the messages are actually added to the flow. This > leads to data-loss on NiFi stop (or crash.) > I believe the fix for this is to allow the user to specify the ACK mode in > the processor configuration like is allowed by the GetJMSQueue processor.
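The data-loss mechanics described in the ticket can be sketched with a toy model in plain Java (this is deliberately not the JMS API; class and method names are hypothetical). With auto-acknowledge the broker forgets a message the moment it is handed over, so a crash before the flow commits loses it; with client-acknowledge the consumer acknowledges only after the message is safely in the flow:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the two acknowledgement modes, not the javax.jms API.
public class AckModeDemo {
    static Queue<String> broker = new ArrayDeque<>(); // broker-side queue
    static Queue<String> flow = new ArrayDeque<>();   // messages committed to the flow

    // AUTO_ACKNOWLEDGE-style: the broker drops the message as soon as it is read
    static String receiveAutoAck() {
        return broker.poll();
    }

    // CLIENT_ACKNOWLEDGE-style: the message stays on the broker until explicitly acked
    static String peekForClientAck() {
        return broker.peek();
    }

    static void acknowledge() {
        broker.poll();
    }

    public static void main(String[] args) {
        broker.add("m1");
        // auto-ack: simulate a crash between receive and commit -> m1 is lost everywhere
        String m = receiveAutoAck();
        boolean crashed = true;
        if (!crashed) {
            flow.add(m);
        }
        System.out.println("auto-ack:   broker=" + broker.size() + " flow=" + flow.size());

        broker.add("m2");
        // client-ack: commit to the flow first, acknowledge second
        String n = peekForClientAck();
        flow.add(n);   // the session commit, in NiFi terms
        acknowledge(); // only now does the broker drop the message
        System.out.println("client-ack: broker=" + broker.size() + " flow=" + flow.size());
    }
}
```

With auto-ack the crash leaves both the broker and the flow empty; with client-ack the message survives because the broker still held it until the flow committed.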
[jira] [Commented] (NIFI-2719) UI - Request race condition
[ https://issues.apache.org/jira/browse/NIFI-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490577#comment-15490577 ] ASF GitHub Bot commented on NIFI-2719: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1011 > UI - Request race condition > --- > > Key: NIFI-2719 > URL: https://issues.apache.org/jira/browse/NIFI-2719 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Reporter: Matt Gilman >Assignee: Matt Gilman > Fix For: 1.1.0 > > > There exists a race condition where during a request to get the components in > the current group another request to create or delete a component may execute. > This results in the component being incorrectly added/removed from the canvas > temporarily. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-2719) UI - Request race condition
[ https://issues.apache.org/jira/browse/NIFI-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490576#comment-15490576 ] ASF subversion and git services commented on NIFI-2719: --- Commit 36846e0fe77b06a34a3fcf99af78c64ab7ecf16e in nifi's branch refs/heads/master from [~mcgilman] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=36846e0 ] NIFI-2719: - Caching components recently added/removed in case ajax requests are received out of order. This is not an issue for modifications of existing components as we're able to leverage the revision. This closes #1011. Signed-off-by: Bryan Bende > UI - Request race condition > --- > > Key: NIFI-2719 > URL: https://issues.apache.org/jira/browse/NIFI-2719 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Reporter: Matt Gilman >Assignee: Matt Gilman > Fix For: 1.1.0 > > > There exists a race condition where during a request to get the components in > the current group another request to create or delete a component may execute. > This results in the component being incorrectly added/removed from the canvas > temporarily.
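The caching idea from the commit message can be sketched as follows. This is a hypothetical illustration (the actual fix lives in the JavaScript UI code): remember components added or removed recently, so that a stale "list components" response arriving out of order can neither resurrect a deleted component nor drop a newly added one:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of reconciling a possibly-stale server snapshot with recent local changes.
public class RecentChangeCache {
    private final Set<String> recentlyAdded = new HashSet<>();
    private final Set<String> recentlyRemoved = new HashSet<>();

    void onAdd(String id) {
        recentlyAdded.add(id);
        recentlyRemoved.remove(id);
    }

    void onRemove(String id) {
        recentlyRemoved.add(id);
        recentlyAdded.remove(id);
    }

    // Apply local knowledge on top of a snapshot that may predate recent changes
    Set<String> reconcile(Set<String> snapshot) {
        Set<String> result = new HashSet<>(snapshot);
        result.addAll(recentlyAdded);      // keep components the snapshot missed
        result.removeAll(recentlyRemoved); // drop components deleted after the snapshot
        return result;
    }

    public static void main(String[] args) {
        RecentChangeCache cache = new RecentChangeCache();
        cache.onAdd("proc-2");
        cache.onRemove("proc-1");
        // Stale snapshot taken before the add/remove happened
        Set<String> stale = new HashSet<>(Arrays.asList("proc-1", "proc-3"));
        System.out.println(cache.reconcile(stale)); // contains proc-2 and proc-3, not proc-1
    }
}
```

As the commit notes, modifications to existing components do not need this cache because the revision number already detects stale updates.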
[GitHub] nifi pull request #1011: Handle UI race condition
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/1011
[jira] [Created] (NIFI-2774) ConsumeJMS processor loses messages on NiFi restart
Christopher McDermott created NIFI-2774: --- Summary: ConsumeJMS processor loses messages on NiFi restart Key: NIFI-2774 URL: https://issues.apache.org/jira/browse/NIFI-2774 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 0.7.0, 1.0.0, 1.1.0, 0.8.0 Reporter: Christopher McDermott Priority: Critical Fix For: 1.1.0, 0.8.0 ConsumeJMS processor uses auto-acknowledge mode. Unlike the deprecated GetJMSQueue processor it does not provide a way to specify a different ACK mode (i.e. client-acknowledge.) Using auto-acknowledge acknowledges message receipt from JMS *before* the messages are actually added to the flow. This leads to data-loss on NiFi stop (or crash.) I believe the fix for this is to allow the user to specify the ACK mode in the processor configuration like is allowed by the GetJMSQueue processor.
[jira] [Commented] (NIFI-2771) REST API does not compress responses
[ https://issues.apache.org/jira/browse/NIFI-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490539#comment-15490539 ] ASF GitHub Bot commented on NIFI-2771: -- GitHub user mcgilman opened a pull request: https://github.com/apache/nifi/pull/1020 Ensuring responses from the REST API are compressed NIFI-2771: - Using GzipHandler instead of GzipFilter. You can merge this pull request into a Git repository by running: $ git pull https://github.com/mcgilman/nifi NIFI-2771 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1020.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1020 commit b8dfdbcb17b8403961f4ecdbbc70ae2bb34b4cf5 Author: Matt GilmanDate: 2016-09-14T14:13:16Z NIFI-2771: - Using GzipHandler instead of GzipFilter. > REST API does not compress responses > > > Key: NIFI-2771 > URL: https://issues.apache.org/jira/browse/NIFI-2771 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.0.0 >Reporter: Mark Payne >Assignee: Matt Gilman >Priority: Critical > Fix For: 1.1.0 > > > Responses from the REST API do not appear to be compressed. In the logs, we > see warnings: > 2016-09-13 15:22:23,124 WARN [main] o.eclipse.jetty.util.DeprecationWarning > Using @Deprecated Class org.eclipse.jetty.servlets.GzipFilter > 2016-09-13 15:22:23,124 WARN [main] org.eclipse.jetty.servlets.GzipFilter > GzipFilter is deprecated. Use GzipHandler -- This message was sent by Atlassian JIRA (v6.3.4#6332)
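The actual fix swaps Jetty's deprecated GzipFilter for GzipHandler; as a JDK-only illustration of why compressing REST responses matters in the first place, a repetitive JSON-like payload (typical of large API responses) shrinks dramatically under gzip:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {
    // Gzip a byte array in memory; this is what a gzip-capable HTTP layer
    // does to a response body before sending it to the client.
    static byte[] gzip(byte[] raw) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // A repetitive JSON-like payload, standing in for a large REST response
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append("{\"id\":").append(i).append(",\"state\":\"RUNNING\"},");
        }
        byte[] raw = sb.toString().getBytes(StandardCharsets.UTF_8);
        byte[] zipped = gzip(raw);
        System.out.println("raw=" + raw.length + " bytes, gzip=" + zipped.length + " bytes");
    }
}
```

The compressed form is a small fraction of the raw size for this kind of payload, which is why the missing compression was worth a Critical-priority fix.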
[GitHub] nifi pull request #1020: Ensuring responses from the REST API are compressed
GitHub user mcgilman opened a pull request: https://github.com/apache/nifi/pull/1020 Ensuring responses from the REST API are compressed NIFI-2771: - Using GzipHandler instead of GzipFilter. You can merge this pull request into a Git repository by running: $ git pull https://github.com/mcgilman/nifi NIFI-2771 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/1020.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1020 commit b8dfdbcb17b8403961f4ecdbbc70ae2bb34b4cf5 Author: Matt Gilman Date: 2016-09-14T14:13:16Z NIFI-2771: - Using GzipHandler instead of GzipFilter.
[jira] [Created] (NIFI-2773) Allow search results to be kept open
Mark Payne created NIFI-2773: Summary: Allow search results to be kept open Key: NIFI-2773 URL: https://issues.apache.org/jira/browse/NIFI-2773 Project: Apache NiFi Issue Type: Improvement Components: Core UI Reporter: Mark Payne Fix For: 1.1.0 I wanted to make a change to each instance of the PublishKafka processors on my canvas. I have 5 instances. To do this, I had to search for PublishKafka, select the first result, change it, search for PublishKafka, select the second result, change it, and so on. This is time consuming and gets much more difficult if there are more search results. We should allow the user to 'pin' the search results or something of that nature so that the results do not go away when one is selected. Instead, they should go away only after I choose to close them. This way, I could search for PublishKafka, update the first one, then just click the next result and update it, click the next result, and so on. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (NIFI-1170) TailFile "File to Tail" property should support Wildcards
[ https://issues.apache.org/jira/browse/NIFI-1170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490494#comment-15490494 ] ASF GitHub Bot commented on NIFI-1170: -- Github user olegz commented on a diff in the pull request: https://github.com/apache/nifi/pull/980#discussion_r78752139 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java --- @@ -117,31 +173,78 @@ .allowableValues(LOCATION_LOCAL, LOCATION_REMOTE) .defaultValue(LOCATION_LOCAL.getValue()) .build(); + static final PropertyDescriptor START_POSITION = new PropertyDescriptor.Builder() .name("Initial Start Position") -.description("When the Processor first begins to tail data, this property specifies where the Processor should begin reading data. Once data has been ingested from the file, " +.description("When the Processor first begins to tail data, this property specifies where the Processor should begin reading data. Once data has been ingested from a file, " + "the Processor will continue from the last point from which it has received data.") .allowableValues(START_BEGINNING_OF_TIME, START_CURRENT_FILE, START_CURRENT_TIME) .defaultValue(START_CURRENT_FILE.getValue()) .required(true) .build(); +static final PropertyDescriptor RECURSIVE = new PropertyDescriptor.Builder() +.name("tailfile-recursive-lookup") +.displayName("Recursive lookup") +.description("When using Multiple files mode, this property defines if files must be listed recursively or not" ++ " in the base directory.") +.allowableValues("true", "false") +.defaultValue("true") +.required(true) +.build(); + +static final PropertyDescriptor ROLLING_STRATEGY = new PropertyDescriptor.Builder() +.name("tailfile-rolling-strategy") +.displayName("Rolling Strategy") +.description("Specifies if the files to tail have a fixed name or not.") +.required(true) +.allowableValues(FIXED_NAME, CHANGING_NAME) +.defaultValue(FIXED_NAME.getValue()) +.build(); 
+ +static final PropertyDescriptor LOOKUP_FREQUENCY = new PropertyDescriptor.Builder() +.name("tailfile-lookup-frequency") +.displayName("Lookup frequency") +.description("Only used in Multiple files mode and Changing name rolling strategy, it specifies the minimum " ++ "duration the processor will wait before listing again the files to tail.") +.required(false) +.addValidator(StandardValidators.TIME_PERIOD_VALIDATOR) +.defaultValue("10 minutes") --- End diff -- @joewitt that is what we are trying to determine "finite value you think is reasonable" and based on Pierre's explanation 10 minutes seems to be anything but. . . @pvillard31 that makes more sense now, so given that in my "practical" experience a typical rollovers are 24hrs, do you think setting the default value to 1hr or 24hr would be more appropriate? > TailFile "File to Tail" property should support Wildcards > - > > Key: NIFI-1170 > URL: https://issues.apache.org/jira/browse/NIFI-1170 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 0.4.0 >Reporter: Andre > > Because of challenges around log rotation of high volume syslog and app > producers, it is customary to logging platform developers to promote file > variables based file names such as DynaFiles (rsyslog), Macros(syslog-ng)as > alternatives to getting SIGHUPs being sent to the syslog daemon upon every > file rotation. > (To certain extent, used even NiFi's has similar patterns, like for example, > when one uses Expression Language to set PutHDFS destination file). 
> The current TailFile strategy suggests rotation patterns like: > {code} > log_folder/app.log > log_folder/app.log.1 > log_folder/app.log.2 > log_folder/app.log.3 > {code} > It is possible to fool the system to accept wildcards by simply using a > strategy like: > {code} > log_folder/test1 > log_folder/server1 > log_folder/server2 > log_folder/server3 > {code} > And configure *Rolling Filename Pattern* to * but it feels like a hack, > rather than catering for an ever increasingly prevalent use case > (DynaFile/macros/etc). > It would be great if instead, TailFile had the ability
[GitHub] nifi pull request #980: NIFI-1170 - Improved TailFile processor to support m...
Github user olegz commented on a diff in the pull request: https://github.com/apache/nifi/pull/980#discussion_r78752139 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/TailFile.java --- @@ -117,31 +173,78 @@ .allowableValues(LOCATION_LOCAL, LOCATION_REMOTE) .defaultValue(LOCATION_LOCAL.getValue()) .build(); + static final PropertyDescriptor START_POSITION = new PropertyDescriptor.Builder() .name("Initial Start Position") -.description("When the Processor first begins to tail data, this property specifies where the Processor should begin reading data. Once data has been ingested from the file, " +.description("When the Processor first begins to tail data, this property specifies where the Processor should begin reading data. Once data has been ingested from a file, " + "the Processor will continue from the last point from which it has received data.") .allowableValues(START_BEGINNING_OF_TIME, START_CURRENT_FILE, START_CURRENT_TIME) .defaultValue(START_CURRENT_FILE.getValue()) .required(true) .build(); +static final PropertyDescriptor RECURSIVE = new PropertyDescriptor.Builder() +.name("tailfile-recursive-lookup") +.displayName("Recursive lookup") +.description("When using Multiple files mode, this property defines if files must be listed recursively or not" ++ " in the base directory.") +.allowableValues("true", "false") +.defaultValue("true") +.required(true) +.build(); + +static final PropertyDescriptor ROLLING_STRATEGY = new PropertyDescriptor.Builder() +.name("tailfile-rolling-strategy") +.displayName("Rolling Strategy") +.description("Specifies if the files to tail have a fixed name or not.") +.required(true) +.allowableValues(FIXED_NAME, CHANGING_NAME) +.defaultValue(FIXED_NAME.getValue()) +.build(); + +static final PropertyDescriptor LOOKUP_FREQUENCY = new PropertyDescriptor.Builder() +.name("tailfile-lookup-frequency") +.displayName("Lookup frequency") +.description("Only used in 
Multiple files mode and Changing name rolling strategy, it specifies the minimum " ++ "duration the processor will wait before listing again the files to tail.") +.required(false) +.addValidator(StandardValidators.TIME_PERIOD_VALIDATOR) +.defaultValue("10 minutes") --- End diff -- @joewitt that is what we are trying to determine "finite value you think is reasonable" and based on Pierre's explanation 10 minutes seems to be anything but. . . @pvillard31 that makes more sense now, so given that in my "practical" experience typical rollovers are 24hrs, do you think setting the default value to 1hr or 24hr would be more appropriate?
[jira] [Commented] (NIFI-2753) HttpServletRequest / HTTP Context Map
[ https://issues.apache.org/jira/browse/NIFI-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490358#comment-15490358 ] Matt Gilman commented on NIFI-2753: --- [~ruckc] In 1.0.0 we've added support for multi-tenant authorization. This is implemented by enabling fine-grain component level access policies. Because Controller Services potentially cross these boundaries we've enabled Process Group level scoping. Without this scoping, you may have many components all with different access policies referencing the same services. Establishing appropriate policies for the service may not be possible. This scoping is what you're seeing in the UI. Controller Services being referenced by components in your data flow must be defined through the Process Group configuration dialog. This is accessed through the context menu on a Process Group or through the Operate palette on the left-hand side of the canvas. A service defined within a Process Group will be available to all descendant components. Controller Services being referenced by Reporting Tasks must be defined through the global Controller Settings. This is accessed through the global menu on the right-hand side of the top bar. > HttpServletRequest / HTTP Context Map > - > > Key: NIFI-2753 > URL: https://issues.apache.org/jira/browse/NIFI-2753 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.0.0 > Environment: Chrome 55, Nifi 1.0.0, Java 1.8.0_102, Windows 10 >Reporter: Curtis Ruck >Priority: Minor > > When trying to create a HandleHttpRequest -> HandleHttpResponse I am unable > to create a new HTTP Context Map object and use it to start the > HandleHttpRequest or HandleHttpResponse processors. 
When creating a new HTTP > Context Map within the HandleHttpRequest editor, it doesn't show in the > Controller Services pane to enable, and when creating the > StandardHttpContextMap in the Controller Services pane, it is unable to be > selected in the HandleHttpRequest and HandleHttpResponse editor. This > renders these processors useless, since they can't be started without a HTTP > Context Map that is enabled for them.
[jira] [Resolved] (NIFI-1334) minor documentation issues
[ https://issues.apache.org/jira/browse/NIFI-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-1334. -- Resolution: Fixed Fix Version/s: 0.5.0 Closing the JIRA. It seems to have been OK for a while now. > minor documentation issues > --- > > Key: NIFI-1334 > URL: https://issues.apache.org/jira/browse/NIFI-1334 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation & Website >Affects Versions: 0.4.1 >Reporter: Lee Laim >Priority: Trivial > Labels: documentation > Fix For: 0.5.0 > > Original Estimate: 1h > Remaining Estimate: 1h > > I noticed a few subtle typos/inconsistencies in the expression-language > guide. They're minor but easily overlooked.
[jira] [Commented] (NIFI-2266) GetHTTP and PutHTTP use hard-coded TLS protocol version
[ https://issues.apache.org/jira/browse/NIFI-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489953#comment-15489953 ] ASF GitHub Bot commented on NIFI-2266: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/999 @alopresto your last changes are fine to me but it's still not working on my side. If I run the unit test individually it works fine, but when running maven build or the whole tests suite of the class in Eclipse, it is not working: Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.868 sec <<< FAILURE! - in org.apache.nifi.processors.standard.TestGetHTTPGroovy testGetHTTPShouldConnectToServerWithTLSv1(org.apache.nifi.processors.standard.TestGetHTTPGroovy) Time elapsed: 0.019 sec <<< FAILURE! java.lang.AssertionError: null at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.nifi.util.StandardProcessorTestRunner.assertQueueEmpty(StandardProcessorTestRunner.java:348) at org.apache.nifi.util.TestRunner$assertQueueEmpty$7.call(Unknown Source) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117) at org.apache.nifi.processors.standard.TestGetHTTPGroovy$_testGetHTTPShouldConnectToServerWithTLSv1_closure7.doCall(TestGetHTTPGroovy.groovy:327) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at 
org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019) at groovy.lang.Closure.call(Closure.java:426) at groovy.lang.Closure.call(Closure.java:442) at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2030) at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2015) at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2056) at org.codehaus.groovy.runtime.dgm$162.invoke(Unknown Source) at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:274) at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) at org.apache.nifi.processors.standard.TestGetHTTPGroovy.testGetHTTPShouldConnectToServerWithTLSv1(TestGetHTTPGroovy.groovy:324) > GetHTTP and PutHTTP use hard-coded TLS protocol version > --- > > Key: NIFI-2266 > URL: https://issues.apache.org/jira/browse/NIFI-2266 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 0.7.0, 0.6.1 >Reporter: Andy LoPresto >Assignee: Andy LoPresto > Labels: https, security, tls > Original Estimate: 1h > Remaining Estimate: 1h > > As pointed out on the mailing list [1], the {{GetHTTP}} (and likely > {{PutHTTP}}) processors use a hard-coded TLS protocol version. {{PostHTTP}} > also did this and was fixed by [NIFI-1688]. > The same fix should apply here and unit tests already exist which can be > applied to the other processors as well. 
> For future notice, {{InvokeHTTP}} is a better processor for generic HTTP > operations and has supported reading the TLS protocol version from the > {{SSLContextService}} for some time. > [1] > https://lists.apache.org/thread.html/a48e2ebbc2231d685491ae6b856c760620efca5bff2c7249f915b24d@%3Cdev.nifi.apache.org%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332)
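The idea behind the fix (as in NIFI-1688) can be sketched with the JDK alone: enable a configurable set of TLS protocol versions on the SSLEngine instead of hard-coding a single version string into the connection code. Names below are illustrative, not the processors' actual code:

```java
import java.util.Arrays;
import java.util.List;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class TlsProtocolDemo {
    // Enable the requested protocol versions, intersected with what the JVM
    // actually supports, rather than pinning one hard-coded version.
    static String[] enabledProtocols(String... wanted) throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, null, null); // default key/trust managers, for illustration only
        SSLEngine engine = ctx.createSSLEngine();
        List<String> supported = Arrays.asList(engine.getSupportedProtocols());
        String[] chosen = Arrays.stream(wanted)
                .filter(supported::contains)
                .toArray(String[]::new);
        engine.setEnabledProtocols(chosen);
        return engine.getEnabledProtocols();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Arrays.toString(enabledProtocols("TLSv1.1", "TLSv1.2")));
    }
}
```

In the real processors the protocol list would come from the SSLContextService configuration, as InvokeHTTP already does.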
[GitHub] nifi issue #999: NIFI-2266 Enabled TLSv1.1 and TLSv1.2 protocols for GetHTTP...
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/999 @alopresto your last changes are fine to me but it's still not working on my side. If I run the unit test individually it works fine, but when running maven build or the whole tests suite of the class in Eclipse, it is not working: Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.868 sec <<< FAILURE! - in org.apache.nifi.processors.standard.TestGetHTTPGroovy testGetHTTPShouldConnectToServerWithTLSv1(org.apache.nifi.processors.standard.TestGetHTTPGroovy) Time elapsed: 0.019 sec <<< FAILURE! java.lang.AssertionError: null at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.nifi.util.StandardProcessorTestRunner.assertQueueEmpty(StandardProcessorTestRunner.java:348) at org.apache.nifi.util.TestRunner$assertQueueEmpty$7.call(Unknown Source) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117) at org.apache.nifi.processors.standard.TestGetHTTPGroovy$_testGetHTTPShouldConnectToServerWithTLSv1_closure7.doCall(TestGetHTTPGroovy.groovy:327) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019) at groovy.lang.Closure.call(Closure.java:426) at 
groovy.lang.Closure.call(Closure.java:442) at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2030) at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2015) at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2056) at org.codehaus.groovy.runtime.dgm$162.invoke(Unknown Source) at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:274) at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) at org.apache.nifi.processors.standard.TestGetHTTPGroovy.testGetHTTPShouldConnectToServerWithTLSv1(TestGetHTTPGroovy.groovy:324) --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---