[jira] [Created] (NIFIREG-221) Retain groupId and Id identifiers when importing Flows from registry
eric twilegar created NIFIREG-221: - Summary: Retain groupId and Id identifiers when importing Flows from registry Key: NIFIREG-221 URL: https://issues.apache.org/jira/browse/NIFIREG-221 Project: NiFi Registry Issue Type: Improvement Reporter: eric twilegar

We started using NiFi Registry with Git integration to gain some insight into the changes being made and to create a relatively standard SDLC for our NiFi process groups (read: jobs). When we started, only one user was making changes, so the Git history was pretty good. When a second user imported a flow to their local machine, made changes, and committed them, we deployed and took a look in Git to see what the user had changed. The diff was riddled with id and groupId process identifier changes, which made the history far less useful than it could have been.

I'm not sure what your future plans are for Git, so this might come out in the wash, but we are looking forward to being able to see the diffs, approve them, and so on. I realize pull requests could be a real issue, as this content doesn't merge well, but eventually we might want a PR from one registry to another, so this could all tie together.

Thanks so much for the great work. Have a wonderful day.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
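One way to work around the id churn today, sketched below, is to normalize the versioned flow JSON before diffing it: strip the volatile identifier fields and sort keys so only real changes survive. The exact key names in the registry's flow snapshot (`identifier`, `groupIdentifier`, `instanceIdentifier`) are assumptions here, not confirmed by this ticket; `id` and `groupId` come from the report itself.

```python
import json

# Identifier fields assumed to change on every import/commit; drop them so
# a git diff of two snapshots shows only substantive flow changes.
VOLATILE_KEYS = {"id", "groupId", "identifier", "groupIdentifier", "instanceIdentifier"}

def strip_ids(node):
    """Recursively remove volatile identifier keys from a parsed flow snapshot."""
    if isinstance(node, dict):
        return {k: strip_ids(v) for k, v in node.items() if k not in VOLATILE_KEYS}
    if isinstance(node, list):
        return [strip_ids(v) for v in node]
    return node

def normalized(flow_json: str) -> str:
    """Canonical form suitable for diffing: ids stripped, keys sorted."""
    return json.dumps(strip_ids(json.loads(flow_json)), indent=2, sort_keys=True)
```

Two snapshots that differ only in their identifiers then normalize to identical text, so the Git history stays readable even before any fix lands in the registry itself.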
[jira] [Created] (NIFI-5914) JSONPathReader doesn't unescape Unicode characters
eric twilegar created NIFI-5914: --- Summary: JSONPathReader doesn't unescape Unicode characters Key: NIFI-5914 URL: https://issues.apache.org/jira/browse/NIFI-5914 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.7.1 Environment: Ubuntu 16 and Postgres 9 Reporter: eric twilegar

I have a flow that uses FlattenJson to read JSON out of a file and convert it to new JSON in the flowfile. If non-ASCII UTF-8 characters are present, the resulting JSON becomes something like {"key": "value \u00EE"}. At the end of the flow I use PutDatabaseRecord with a JSONPathReader to grab JSON keys and plop them into a database table.

The problem is that the values in the database end up as the escaped Unicode from the JSON. I'm working around it by extracting the JSON and running it through NiFi Expression Language's unescapeJson.

I'm not sure if the real issue is that FlattenJson has no option to keep the encoding as UTF-8, or whether it's JSONPathReader, or even PutDatabaseRecord, that should be doing the unescaping.
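For reference, the behaviour the reporter expects is what any compliant JSON parser does: a `\uXXXX` escape in the document is decoded back to the actual character when the value is read, so the escape should never leak into the database. A minimal demonstration with Python's standard `json` module:

```python
import json

# The escaped form FlattenJson produces, as described in the ticket
# (\u00ee is the JSON escape for "î"):
escaped = '{"key": "value \\u00ee"}'

# A spec-compliant parse decodes the escape back to the character itself.
value = json.loads(escaped)["key"]
assert value == "value \u00ee"
assert "\\u" not in value  # no literal backslash-u survives the parse
```

So if the database rows contain the literal text `\u00EE`, the escape is being treated as plain characters somewhere between the reader and the insert, which is what makes this look like a bug rather than expected behaviour.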
[jira] [Created] (NIFI-5608) PutDatabaseRecord will remove _s in update keys if translate columns is true
eric twilegar created NIFI-5608: --- Summary: PutDatabaseRecord will remove _s in update keys if translate columns is true Key: NIFI-5608 URL: https://issues.apache.org/jira/browse/NIFI-5608 Project: Apache NiFi Issue Type: Bug Reporter: eric twilegar

I had a table where the column names were all defined in lowercase. In the NiFi records the field names were mixed case and sort of all over the place. Translate Field Names was working well, but then I added a column with an "_" in it: a column like my_id, which was part of the primary key and so was used as the WHERE clause in the update statement. The generated WHERE clause was "WHERE myid = 5" instead of "WHERE my_id = 5".
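The symptom is consistent with a name-translation step that strips punctuation (including underscores) before matching field names to columns, and then reuses the stripped form when building the statement. This is only a model of the suspected behaviour, not the actual PutDatabaseRecord code:

```python
import re

def translate(name: str) -> str:
    """Assumed aggressive normalization: drop non-alphanumerics, lowercase.
    This models the suspected bug, not NiFi's actual implementation."""
    return re.sub(r"[^a-zA-Z0-9]", "", name).lower()

# Matching succeeds despite case differences, which is why translation
# "was working well" for ordinary columns...
assert translate("MyId") == translate("myid")

# ...but an underscored key loses its underscore, so the generated
# update uses "WHERE myid = ?" against a table whose column is my_id.
assert translate("my_id") == "myid"
```

If that is roughly what happens, the fix would be to translate names only for *matching* and keep the table's own column name when emitting SQL.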
[jira] [Updated] (NIFI-5607) JDBC connection processors hang after network disruptions.
[ https://issues.apache.org/jira/browse/NIFI-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] eric twilegar updated NIFI-5607: Description: I use a VPN connection to connect to our AWS VPC. OpenVPN can sometimes just drop the connection. If I'm developing a task that connects to Postgres over the VPN and my VPN connection drops, any processors using the service hang. The processors themselves won't stop, nor will the service. I don't see this issue in production since the networking is far more consistent. I'll take a look at the code later, as I'm sure this will be a hard ticket for any developer to reproduce. If I get some time, maybe I'll see if I can reproduce it with MySQL to eliminate the Postgres JDBC driver as the culprit. I know there were some posts about the desire to "hard stop" processors and/or services. At the moment I have to restart the entire server. Thanks! was: I use a VPN connection to connect to our AWS VPC. OpenVPN can sometimes just drop the connection. If I'm developing a task that connects to Postgres over the VPN and my VPN connection drops, any processors using the service hang. The processors themselves won't stop, nor will the service. I don't see this issue in production since the networking is far more consistent. What is odd is that the database connection is not going through the VPN, but DNS is. The DB connection is actually through an SSH tunnel. I'm using IP addresses, though, so I'm not really sure why the networking would even be disrupted, but it sure is. Maybe it's just a Linux thing in the network stack with OpenVPN. Either way, NiFi isn't handling some type of networking issue in there. I'll take a look at the code later, as I'm sure this will be a hard ticket for any developer to reproduce. If I get some time, maybe I'll see if I can reproduce it with MySQL to eliminate the Postgres JDBC driver as the culprit. I know there were some posts about the desire to "hard stop" processors and/or services.
At the moment I have to restart the entire server. Thanks! > JDBC connection processors hang after network disruptions. > -- > > Key: NIFI-5607 > URL: https://issues.apache.org/jira/browse/NIFI-5607 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework > Affects Versions: 1.6.0 > Reporter: eric twilegar > Priority: Major > > I use a VPN connection to connect to our AWS VPC. OpenVPN can sometimes just > drop the connection. > If I'm developing a task that connects to Postgres over the VPN and my VPN > connection drops, any processors using the service hang. The processors > themselves won't stop, nor will the service. > I don't see this issue in production since the networking is far more > consistent. > I'll take a look at the code later, as I'm sure this will be a hard ticket for > any developer to reproduce. If I get some time, maybe I'll see if I can > reproduce it with MySQL to eliminate the Postgres JDBC driver as the culprit. > I know there were some posts about the desire to "hard stop" processors and/or > services. At the moment I have to restart the entire server. > Thanks! >
[jira] [Created] (NIFI-5607) JDBC connection processors hang after network disruptions.
eric twilegar created NIFI-5607: --- Summary: JDBC connection processors hang after network disruptions. Key: NIFI-5607 URL: https://issues.apache.org/jira/browse/NIFI-5607 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.6.0 Reporter: eric twilegar

I use a VPN connection to connect to our AWS VPC. OpenVPN can sometimes just drop the connection. If I'm developing a task that connects to Postgres over the VPN and my VPN connection drops, any processors using the service hang. The processors themselves won't stop, nor will the service. I don't see this issue in production since the networking is far more consistent.

What is odd is that the database connection is not going through the VPN, but DNS is. The DB connection is actually through an SSH tunnel. I'm using IP addresses, though, so I'm not really sure why the networking would even be disrupted, but it sure is. Maybe it's just a Linux thing in the network stack with OpenVPN. Either way, NiFi isn't handling some type of networking issue there.

I'll take a look at the code later, as I'm sure this will be a hard ticket for any developer to reproduce. If I get some time, maybe I'll see if I can reproduce it with MySQL to eliminate the Postgres JDBC driver as the culprit.

I know there were some posts about the desire to "hard stop" processors and/or services. At the moment I have to restart the entire server. Thanks!
[jira] [Created] (NIFI-5594) Allow TransformXML processor to have an attribute set to the XSLT
eric twilegar created NIFI-5594: --- Summary: Allow TransformXML processor to have an attribute set to the XSLT Key: NIFI-5594 URL: https://issues.apache.org/jira/browse/NIFI-5594 Project: Apache NiFi Issue Type: Improvement Reporter: eric twilegar

When using NiFi Registry, requiring the XSLT to be a file on disk prevents me from deploying everything self-contained. I might create a separate job that writes the file someplace, but it would be nice not to have to.
[jira] [Created] (NIFI-5270) my ftp password is "${password}" so NiFi's ListFTP won't use it.
eric twilegar created NIFI-5270: --- Summary: my ftp password is "${password}" so NiFi's ListFTP won't use it. Key: NIFI-5270 URL: https://issues.apache.org/jira/browse/NIFI-5270 Project: Apache NiFi Issue Type: Bug Reporter: eric twilegar

I'm joking of course, but if that were your password, the processor would fail because it would treat the value as an expression rather than a password. In all seriousness, though, we really need something like an "isPasswordExpression" checkbox for all controllers. This would also allow NiFi Registry to not consider such values secrets, so you don't have to cut and paste ${ftp_password} back in after deploying a version. Maybe adding a separate passwordExpression property, rather than overloading the existing one, is a better idea.

I didn't test whether you can escape the password in some way, so there is a chance this isn't a bug.
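On the escaping question the reporter left open: NiFi Expression Language does define `$$` as an escape for a literal `$`, so `$${password}` should pass the literal text `${password}` through on properties that support EL; whether ListFTP's password property honors that was not verified in this ticket. A toy model of the detection-vs-escape distinction:

```python
import re

def looks_like_expression(value: str) -> bool:
    """Naive model of EL detection: any ${...} span counts, ignoring escapes.
    This is an illustration, not NiFi's actual parser."""
    return bool(re.search(r"\$\{[^}]*\}", value))

def unescape_literal(value: str) -> str:
    """Model of the documented $$ escape: $${x} becomes the literal ${x}."""
    return value.replace("$${", "${")

# The unlucky password is detected as an expression...
assert looks_like_expression("${password}")
# ...an ordinary password is not...
assert looks_like_expression("hunter2") is False
# ...and the escaped form would recover the intended literal value.
assert unescape_literal("$${password}") == "${password}"
```

An explicit "treat this value as a literal" checkbox, as proposed, would make the behaviour obvious instead of depending on users knowing the escape syntax.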
[jira] [Created] (NIFI-5178) Stackable content
eric twilegar created NIFI-5178: --- Summary: Stackable content Key: NIFI-5178 URL: https://issues.apache.org/jira/browse/NIFI-5178 Project: Apache NiFi Issue Type: Improvement Reporter: eric twilegar

I'm having an issue where I need to make a decision while processing a list of records, similar to an upsert/merge type of flow. Before routing, I first need to check whether a record has already been imported, or was imported through some other mechanism. To do this I add an ExecuteSQL to the flow. All I really need is executesql.row.count:equals(0); the actual results of the ExecuteSQL are useless to me. I'm simply trying to decide how to branch with RouteOnAttribute.

The workaround now is to store the original content in an attribute and then use ReplaceText to plop it back in after the ExecuteSQL processor. This can get quite cumbersome if you have 4 or 5 decision points.

What would be nice is if all processors could PUSH content instead of replacing it, plus a processor called "PopContent" which would just remove the last content that was pushed onto the flowfile. If content were a stack, you could go off and get some data, do a few stages with it, add some attributes, and then pop back to the original content. In my case ExecuteSQL wouldn't overwrite the content but would instead push the new data onto the stack.

I'm not sure if LookupService is a better mechanism for this going forward. It's possible I could do a lookup and, instead of enriching the data, add a boolean-type key to be used as a decision point later, such as "alreadyExistsInDatabase": "true|false" rather than the "store_name": "Greatest store on earth" sort of value that enrichment generally produces. I'm sure an SQL LookupService is coming.

Adding an "original" relationship to ExecuteSQL might also solve this issue without a lot of major refactoring in NiFi. I may put in a ticket for adding original to ExecuteSQL, possibly looking at the code myself. Thanks for the great tool!
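The proposed push/pop model from NIFI-5178 can be sketched as a tiny data structure: a flowfile whose content is a stack, where an enrichment processor pushes its result and a hypothetical PopContent processor restores whatever was there before. Names like `push_content` and `executesql.row.count` usage below are illustrative only:

```python
class FlowFile:
    """Toy model of a flowfile with stackable content, per the proposal."""

    def __init__(self, content: bytes):
        self._stack = [content]   # bottom of the stack is the original payload
        self.attributes = {}

    @property
    def content(self) -> bytes:
        return self._stack[-1]    # processors always see the top of the stack

    def push_content(self, new_content: bytes) -> None:
        """What ExecuteSQL would do instead of overwriting the content."""
        self._stack.append(new_content)

    def pop_content(self) -> bytes:
        """What a PopContent processor would do: discard the top, restore below."""
        if len(self._stack) == 1:
            raise ValueError("nothing to pop: only the original content remains")
        return self._stack.pop()

# ExecuteSQL pushes its (useless-to-us) result and sets the row count...
ff = FlowFile(b'{"order_id": 42}')
ff.push_content(b"[]")
ff.attributes["executesql.row.count"] = "0"

# ...RouteOnAttribute branches on the attribute, then PopContent
# restores the original payload with no ReplaceText gymnastics.
ff.pop_content()
assert ff.content == b'{"order_id": 42}'
```

The attribute survives the pop because attributes live beside the stack, which is exactly why the decision point no longer needs the content round-trip workaround.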