[jira] [Commented] (NIFI-4637) Add support for HBase visibility labels to HBase processors and controller services
[ https://issues.apache.org/jira/browse/NIFI-4637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451684#comment-16451684 ]

ASF GitHub Bot commented on NIFI-4637:
--

Github user ijokarumawak commented on the issue:

    https://github.com/apache/nifi/pull/2518

    @MikeThomsen Thanks for rebasing. I am now testing against my HBase instance.

> Add support for HBase visibility labels to HBase processors and controller
> services
> ---
>
> Key: NIFI-4637
> URL: https://issues.apache.org/jira/browse/NIFI-4637
> Project: Apache NiFi
> Issue Type: Improvement
> Reporter: Mike Thomsen
> Assignee: Mike Thomsen
> Priority: Major
>
> HBase supports visibility labels, but you can't use them from NiFi because
> there is no way to set them. The existing processors and services should be
> upgraded to handle this capability.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
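For readers unfamiliar with the feature being added: HBase visibility labels gate cell access with a boolean label expression evaluated against a user's authorizations. The sketch below is a conceptual illustration of that matching (supporting only `&` and `|`, no parentheses), not the actual HBase or NiFi implementation; the class and method names are hypothetical.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Conceptual sketch of HBase-style visibility label matching: a cell tagged
// with an expression like "PII&ADMIN" is visible only to users whose
// authorizations satisfy it. Only '&' and '|' are handled here (no
// parentheses); the real HBase visibility engine is more general.
public class VisibilityCheck {
    public static boolean isVisible(String expression, Set<String> userAuths) {
        // Treat the expression as an OR of AND-clauses: the cell is visible
        // if any one clause is fully satisfied by the user's labels.
        for (String clause : expression.split("\\|")) {
            boolean ok = true;
            for (String label : clause.split("&")) {
                if (!userAuths.contains(label.trim())) {
                    ok = false;
                    break;
                }
            }
            if (ok) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> auths = new HashSet<>(Arrays.asList("PII", "ADMIN"));
        System.out.println(isVisible("PII&ADMIN", auths));  // true
        System.out.println(isVisible("SECRET|PII", auths)); // true
        System.out.println(isVisible("SECRET", auths));     // false
    }
}
```

In the real HBase client API, such an expression is attached to a mutation (e.g. via `Put.setCellVisibility`), which is the capability the processors and controller services in this PR need to expose.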
[jira] [Created] (NIFIREG-169) NiFi FDS Release process
Scott Aslan created NIFIREG-169:
---

Summary: NiFi FDS Release process
Key: NIFIREG-169
URL: https://issues.apache.org/jira/browse/NIFIREG-169
Project: NiFi Registry
Issue Type: Improvement
Reporter: Scott Aslan

As a developer I want to build, tag and release to git, and publish the nifi-fds package to npm. This process should leave the src/ directory untouched, should produce any convenience bundled JavaScript or minified style sheets in a dist/ or target/ directory, and should leverage npm scripts to automate these tasks. The process should be documented for future reference.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFIREG-127) Upgrade Apache NiFi Registry to Angular 5.2
[ https://issues.apache.org/jira/browse/NIFIREG-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Scott Aslan updated NIFIREG-127:
---

Summary: Upgrade Apache NiFi Registry to Angular 5.2 (was: Remove old params and queryParams for paramMap and queryParamMap respectively.)

> Upgrade Apache NiFi Registry to Angular 5.2
> ---
>
> Key: NIFIREG-127
> URL: https://issues.apache.org/jira/browse/NIFIREG-127
> Project: NiFi Registry
> Issue Type: Improvement
> Reporter: Scott Aslan
> Assignee: Scott Aslan
> Priority: Major
>
> Angular v4.4.6 ActivatedRoute https://v4.angular.io/api/router/ActivatedRoute
> makes two older properties still available. They are less capable than their
> replacements, discouraged, and may be deprecated in a future Angular version.
> *{{params}}* — An {{Observable}} that contains the required and [optional
> parameters|https://v4.angular.io/guide/router#optional-route-parameters]
> specific to the route. Use {{paramMap}} instead.
> *{{queryParams}}* — An {{Observable}} that contains the [query
> parameters|https://v4.angular.io/guide/router#query-parameters] available to
> all routes. Use {{queryParamMap}} instead.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5044) SelectHiveQL accept only one statement
[ https://issues.apache.org/jira/browse/NIFI-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451516#comment-16451516 ]

Ed Berezitsky commented on NIFI-5044:
--

There are some assumptions that need to be made, or more properties added. Let's assume the pre-query fails due to incorrect syntax or errors (like "_Error: Error while processing statement: Cannot modify var123 at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)_"), etc. Should we route to failure, or should we just create an attribute with the list of errors for pre- and post-queries? Or we could define another property for the user to decide ("On pre/post Error": ignore/failure). [~pvillard], [~mattyb149], [~disoardi], your thoughts?

Also, I see this is still unassigned. Is anybody working on it? If not, I can take it.

> SelectHiveQL accept only one statement
> ---
>
> Key: NIFI-5044
> URL: https://issues.apache.org/jira/browse/NIFI-5044
> Project: Apache NiFi
> Issue Type: Improvement
> Affects Versions: 1.2.0
> Reporter: Davide Isoardi
> Priority: Critical
>
> [This commit|https://github.com/apache/nifi/commit/bbc714e73ba245de7bc32fd9958667c847101f7d]
> claims to add support for running multiple statements in both SelectHiveQL
> and PutHiveQL; instead, it adds the support only to PutHiveQL, so
> SelectHiveQL still lacks this important feature. @Matt Burgess, I saw that
> you worked on that, is there any reason for this? If not, can we support it?
> If I try to execute this query:
> {quote}set hive.vectorized.execution.enabled = false; SELECT * FROM table_name{quote}
> I have this error:
> {quote}2018-04-05 13:35:40,572 ERROR [Timer-Driven Process Thread-146]
> o.a.nifi.processors.hive.SelectHiveQL
> SelectHiveQL[id=243d4c17-b1fe-14af--ee8ce15e] Unable to execute
> HiveQL select query set hive.vectorized.execution.enabled = false; SELECT *
> FROM table_name for
> StandardFlowFileRecord[uuid=0e035558-07ce-473b-b0d4-ac00b8b1df93,claim=StandardContentClaim
> [resourceClaim=StandardResourceClaim[id=1522824912161-2753,
> container=default, section=705], offset=838441,
> length=25],offset=0,name=cliente_attributi.csv,size=25] due to
> org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException:
> The query did not generate a result set!; routing to failure: {}
> org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException:
> The query did not generate a result set!
> at org.apache.nifi.processors.hive.SelectHiveQL$2.process(SelectHiveQL.java:305)
> at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
> at org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:275)
> at org.apache.nifi.processors.hive.SelectHiveQL.lambda$onTrigger$0(SelectHiveQL.java:215)
> at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
> at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:106)
> at org.apache.nifi.processors.hive.SelectHiveQL.onTrigger(SelectHiveQL.java:215)
> at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
> at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.sql.SQLException: The query did not generate a result set!
> at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:438)
> at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
> at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
> at org.apache.nifi.processors.hive.SelectHiveQL$2.process(SelectHiveQL.java:293)
> {quote}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
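The error above occurs because the whole script is passed to a single `executeQuery()`, and a `set ...` statement produces no result set. One way SelectHiveQL could support such scripts is to split the input, run everything before the last statement with `execute()`, and only the final SELECT with `executeQuery()`. The splitting helper below is a hedged sketch of that idea (class name hypothetical, naive split that does not handle `;` inside string literals), not the actual NiFi code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper for scripts like
// "set hive.vectorized.execution.enabled = false; SELECT * FROM table_name":
// every statement before the last would be run via Statement.execute(), and
// only the final one via Statement.executeQuery(). Naive split: a ';' inside
// a string literal would break it.
public class HiveQLSplitter {
    public static List<String> split(String script) {
        List<String> statements = new ArrayList<>();
        for (String s : script.split(";")) {
            String trimmed = s.trim();
            if (!trimmed.isEmpty()) {
                statements.add(trimmed);
            }
        }
        return statements;
    }

    public static void main(String[] args) {
        List<String> stmts = split(
            "set hive.vectorized.execution.enabled = false; SELECT * FROM table_name");
        // All entries but the last are pre-queries; the last entry is the
        // SELECT that should produce the result set.
        System.out.println(stmts);
    }
}
```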
[jira] [Commented] (NIFI-4980) Typo in ReportAtlasLineage kafka kerberos service name property
[ https://issues.apache.org/jira/browse/NIFI-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451494#comment-16451494 ]

ASF GitHub Bot commented on NIFI-4980:
--

Github user ijokarumawak commented on the issue:

    https://github.com/apache/nifi/pull/2550

    @pvillard31 @MikeThomsen I thought adding backward compatibility would make it complex, but it was not as much work as I expected once I tried. I added another commit with a few more lines of code to support migration from the old property to the new renamed one. It lets users keep the reporting task running with the old configuration until they remove the old property. I think this approach will guide users in the right direction better than providing a migration guide in the release notes. Thanks!

> Typo in ReportAtlasLineage kafka kerberos service name property
> ---
>
> Key: NIFI-4980
> URL: https://issues.apache.org/jira/browse/NIFI-4980
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Affects Versions: 1.5.0
> Reporter: Koji Kawamura
> Assignee: Koji Kawamura
> Priority: Trivial
> Attachments: nifi-4980-screenshot1.png, nifi-4980-screenshot2.png
>
> The "Kafka Kerberos Service Name" property name is
> "kafka-kerberos-service-name-kafka". 'kafka' is redundant.
> It should be "kafka-kerberos-service-name".
> This is reported by [~nayakmahesh616].

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
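The migration approach described in the comment can be sketched as a simple fallback lookup: read the renamed property first and fall back to the old (typo'd) name so existing configurations keep working. The property names below come from the issue; the lookup helper itself is a hypothetical illustration, not the PR's actual code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the backward-compatibility idea: prefer the renamed property,
// but fall back to the old property name so a reporting task configured
// before the rename keeps running until the user removes the old property.
public class PropertyMigration {
    static final String NEW_NAME = "kafka-kerberos-service-name";
    static final String OLD_NAME = "kafka-kerberos-service-name-kafka";

    public static String resolve(Map<String, String> configured) {
        String value = configured.get(NEW_NAME);
        // Fall back to the old property only when the new one is unset.
        return (value != null) ? value : configured.get(OLD_NAME);
    }

    public static void main(String[] args) {
        Map<String, String> oldConfig = new HashMap<>();
        oldConfig.put(OLD_NAME, "kafka");
        System.out.println(resolve(oldConfig)); // kafka
    }
}
```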
[jira] [Commented] (NIFI-4971) ReportLineageToAtlas 'complete path' strategy can miss one-time lineages
[ https://issues.apache.org/jira/browse/NIFI-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451426#comment-16451426 ]

ASF GitHub Bot commented on NIFI-4971:
--

Github user ijokarumawak commented on the issue:

    https://github.com/apache/nifi/pull/2542

    @MikeThomsen I might have missed your points, but let me answer your comments.

    For the lineage graph showing two outgoing links from /tmp/in/test to two `GetFile, PutFile` processes going to the final /temp/out/test, I assume having two processes is what concerns you. Can you share the `qualified name` attributes of those two `nifi_flow_path` entities? I guess they have different qualified names, probably one created by 'simple path' and the other by 'complete path', if you ran the same flow with both strategies.

    As for the comment about not seeing any lineage with 'simple_path': Atlas does not draw a lineage graph if you choose an entity that is a subclass of 'Process'. When you selected the 'nifi_flow_path' entity, didn't its input/output attribute have links to the `fs_path` entities? If you follow the link, the lineage will be shown from the `fs_path` entity, which is a subclass of 'DataSet'.

    Please let me know if the above descriptions address your issues. Thanks!
> ReportLineageToAtlas 'complete path' strategy can miss one-time lineages
> ---
>
> Key: NIFI-4971
> URL: https://issues.apache.org/jira/browse/NIFI-4971
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Affects Versions: 1.5.0
> Reporter: Koji Kawamura
> Assignee: Koji Kawamura
> Priority: Major
>
> For the simplest example, with GetFlowFile (GFF) -> PutFlowFile (PFF), where
> GFF gets files and PFF saves those files into a different directory, the
> following provenance events will be generated:
> # GFF RECEIVE file1
> # PFF SEND file2
> From the above provenance events, the following entities and lineages should
> be created in Atlas (labels in brackets are Atlas type names):
> {code}
> file1 (fs_path) -> GFF, PFF (nifi_flow_path) -> file2 (fs_path)
> {code}
> The entities shown in the above graph are created. However, the
> 'nifi_flow_path' entity does not have inputs/outputs referencing 'fs_path',
> so the lineage cannot be seen in the Atlas UI.
> This issue was discovered by [~nayakmahesh616]

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (NIFI-5113) Add XML record writer
[ https://issues.apache.org/jira/browse/NIFI-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451277#comment-16451277 ]

Johannes Peter edited comment on NIFI-5113 at 4/24/18 9:59 PM:
---

[~markap14] Hi Mark, I am wondering how we can solve the following issue. Assume we have the following record:

{code}
MapRecord[{ID=1, NAME=Cleve Butler, AGE=42}]
{code}

Defining a schema for this is straightforward, as long as all keys shall become tags and all values shall become character content:

Schema:
{code}
{
  "namespace": "nifi",
  "name": "PERSON",
  "type": "record",
  "fields": [
    { "name": "ID", "type": "string" },
    { "name": "NAME", "type": "string" },
    { "name": "AGE", "type": "int" },
    { "name": "COUNTRY", "type": "string" }
  ]
}
{code}

Result:
{code}
<PERSON>
  <ID>1</ID>
  <NAME>Cleve Butler</NAME>
  <AGE>42</AGE>
</PERSON>
{code}

However, I am wondering how the schema can be defined to write XML with ID as an attribute:

{code}
<PERSON ID="1">
  <NAME>Cleve Butler</NAME>
  <AGE>42</AGE>
</PERSON>
{code}

One way could be to instruct users to define a prefix for attributes via a property. Let's assume the value of the property is "ATTR_". The schema then has to be defined like this:

Schema:
{code}
{
  "namespace": "nifi",
  "name": "PERSON",
  "type": "record",
  "fields": [
    { "name": "ATTR_ID", "type": "string" },
    { "name": "NAME", "type": "string" },
    { "name": "AGE", "type": "int" },
    { "name": "COUNTRY", "type": "string" }
  ]
}
{code}

When WriteXMLResult is created, the schema is checked for fields starting with "ATTR_". Matching fields are replaced by fields without the prefix, and references to these fields are put into a list. When the above record is written to XML, the writer can check for each field whether its reference is contained in the list. If that is the case, the field is written to the XML as an attribute.

This is the best workaround I have identified so far. Do you have any other ideas? Are there already any plans to enhance records / schemas with metadata / attributes?
> Add XML record writer
> ---
>
> Key: NIFI-5113
> URL: https://issues.apache.org/jira/browse/NIFI-5113
> Project: Apache NiFi
> Issue Type: New Feature
> Reporter: Johannes Peter
> Assignee: Johannes Peter
> Priority: Major
>
> Corresponding writer for the XML record reader

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
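The "ATTR_" prefix convention discussed in the comment can be sketched with the JDK's StAX writer: fields whose schema name starts with the configured prefix are written as XML attributes (with the prefix stripped), all others as child elements. This is a hypothetical illustration with the record simplified to a `Map`; the real WriteXMLResult would work against NiFi's Record/RecordSchema API.

```java
import java.io.StringWriter;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;

// Sketch of the prefix-based attribute convention: "ATTR_ID" becomes the
// attribute ID on the record element, every other field becomes a child tag.
public class AttrPrefixXmlWriter {
    public static String write(String recordName, Map<String, Object> fields,
                               String attrPrefix) throws XMLStreamException {
        StringWriter out = new StringWriter();
        XMLStreamWriter w = XMLOutputFactory.newFactory().createXMLStreamWriter(out);
        w.writeStartElement(recordName);
        // Attributes must be written before any child elements.
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            if (e.getKey().startsWith(attrPrefix)) {
                w.writeAttribute(e.getKey().substring(attrPrefix.length()),
                                 String.valueOf(e.getValue()));
            }
        }
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            if (!e.getKey().startsWith(attrPrefix)) {
                w.writeStartElement(e.getKey());
                w.writeCharacters(String.valueOf(e.getValue()));
                w.writeEndElement();
            }
        }
        w.writeEndElement();
        w.close();
        return out.toString();
    }

    public static void main(String[] args) throws XMLStreamException {
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("ATTR_ID", 1);
        record.put("NAME", "Cleve Butler");
        record.put("AGE", 42);
        System.out.println(write("PERSON", record, "ATTR_"));
        // <PERSON ID="1"><NAME>Cleve Butler</NAME><AGE>42</AGE></PERSON>
    }
}
```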
[jira] [Created] (NIFI-5118) AMQP Consumers/Processors add support for Nifi Expression Language
Edward Armes created NIFI-5118: -- Summary: AMQP Consumers/Processors add support for NiFi Expression Language Key: NIFI-5118 URL: https://issues.apache.org/jira/browse/NIFI-5118 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Edward Armes The AMQP consumer and producer configurations don't currently support the NiFi Expression Language; this prevents them from using the variable registry or service components that provide configuration properties. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5118) AMQP Consumers/Processors add support for NiFi Expression Language
[ https://issues.apache.org/jira/browse/NIFI-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Edward Armes updated NIFI-5118: --- Description: The AMQP consumer and producer processors don't currently support the NiFi Expression Language; this prevents them from using the variable registry or service components that provide configuration properties. (was: The AMQP Consumers and producer configurations don't currently support the Nifi expression language this prevents it from using the variable registry or service components that provide configuration properties) > AMQP Consumers/Processors add support for NiFi Expression Language > -- > > Key: NIFI-5118 > URL: https://issues.apache.org/jira/browse/NIFI-5118 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions > Reporter: Edward Armes > Priority: Minor > > The AMQP consumer and producer processors don't currently support the NiFi > Expression Language; this prevents them from using the variable registry or > service components that provide configuration properties. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5117) AMQP Consumer: Error during creation of Flow File results in lost message
Edward Armes created NIFI-5117: -- Summary: AMQP Consumer: Error during creation of Flow File results in lost message Key: NIFI-5117 URL: https://issues.apache.org/jira/browse/NIFI-5117 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Edward Armes The AMQP Consumer performs a "basicGet()". The way basicGet is called results in the message being dequeued from the AMQP queue immediately. If a processor instance fails to submit a flow file to the output, for example because of an error in "session.write()", or the processor is unexpectedly halted before the flow file is created and persisted, the message consumed from the AMQP queue is lost and can't be recovered. Reference: https://rabbitmq.github.io/rabbitmq-java-client/api/current/com/rabbitmq/client/Channel.html#basicGet-java.lang.String-boolean- A potential fix here would be to: # AMQPConsumer.java: Change the call "basicGet(this.queueName, true)" -> "basicGet(this.queueName, false)" # AMQPConsumer.java: Add a new method that wraps basicAck() and basicNack(), taking a long (the delivery tag) and a boolean (success); if success is true, basicAck() is called, otherwise basicNack() with requeue is called # ConsumerAMQP.java: Additional call(s) on "consumer" to invoke the new method as needed on success or error. Happy to submit a patch if that helps. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
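The proposed ack/nack wrapper (step 2 above) can be sketched as follows. `AckChannel` is a hypothetical stand-in mirroring the two `com.rabbitmq.client.Channel` methods involved, `basicAck(long, boolean)` and `basicNack(long, boolean, boolean)`, and `acknowledge()` is an illustrative name, not the actual NiFi AMQPConsumer API:

```java
// Sketch of the proposed fix: with basicGet(queueName, false) the message
// stays unacknowledged on the broker until the processor explicitly acks
// or nacks it, so a failure before the flow file is persisted no longer
// loses the message.
class AmqpAckSketch {

    // Minimal stand-in for the relevant com.rabbitmq.client.Channel methods.
    interface AckChannel {
        void basicAck(long deliveryTag, boolean multiple);
        void basicNack(long deliveryTag, boolean multiple, boolean requeue);
    }

    // Proposed wrapper: ack on success, nack with requeue on failure.
    static void acknowledge(AckChannel channel, long deliveryTag, boolean success) {
        if (success) {
            channel.basicAck(deliveryTag, false);
        } else {
            channel.basicNack(deliveryTag, false, true); // requeue for redelivery
        }
    }

    public static void main(String[] args) {
        final StringBuilder log = new StringBuilder();
        // Recording implementation so the behavior can be observed without a broker.
        AckChannel recording = new AckChannel() {
            public void basicAck(long tag, boolean multiple) {
                log.append("ack:").append(tag);
            }
            public void basicNack(long tag, boolean multiple, boolean requeue) {
                log.append(" nack:").append(tag).append(" requeue=").append(requeue);
            }
        };
        acknowledge(recording, 7L, true);   // flow file committed -> ack
        acknowledge(recording, 8L, false);  // session.write() failed -> nack + requeue
        System.out.println(log);
    }
}
```

In the real processor, `acknowledge()` would be called with the delivery tag returned by `basicGet`, after the NiFi session either commits or rolls back.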
[jira] [Commented] (MINIFICPP-470) remove extensions list.
[ https://issues.apache.org/jira/browse/MINIFICPP-470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450440#comment-16450440 ] ASF GitHub Bot commented on MINIFICPP-470: -- Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/308 @apiri I assume most would arrive here via the README, but it doesn't hurt to add a link. I'm on the fence about the review process; there are some things that may not make sense to review. I was also hoping we'd find a better way than bootstrapping. Adding a pre-commit hook to alert to a new extension that should be added is on the docket. > remove extensions list. > > > Key: MINIFICPP-470 > URL: https://issues.apache.org/jira/browse/MINIFICPP-470 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: marco polo >Assignee: marco polo >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp issue #308: MINIFICPP-470: Remove old extensions listing
Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/308 @apiri I assume most would arrive here via the README, but it doesn't hurt to add a link. I'm on the fence about the review process; there are some things that may not make sense to review. I was also hoping we'd find a better way than bootstrapping. Adding a pre-commit hook to alert to a new extension that should be added is on the docket. ---
[jira] [Commented] (MINIFICPP-470) remove extensions list.
[ https://issues.apache.org/jira/browse/MINIFICPP-470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450433#comment-16450433 ] ASF GitHub Bot commented on MINIFICPP-470: -- Github user apiri commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/308 Does it make sense to make a note of bootstrap containing the extensions and as a way of getting a listing? Should we also make it such that any extensions must go in bootstrap as a part of the review process? > remove extensions list. > > > Key: MINIFICPP-470 > URL: https://issues.apache.org/jira/browse/MINIFICPP-470 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: marco polo >Assignee: marco polo >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp issue #308: MINIFICPP-470: Remove old extensions listing
Github user apiri commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/308 Does it make sense to make a note of bootstrap containing the extensions and as a way of getting a listing? Should we also make it such that any extensions must go in bootstrap as a part of the review process? ---
[jira] [Resolved] (NIFI-5108) Update to Commons Compress to 1.16.1
[ https://issues.apache.org/jira/browse/NIFI-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto resolved NIFI-5108. - Resolution: Fixed Fix Version/s: 1.7.0 > Update to Commons Compress to 1.16.1 > > > Key: NIFI-5108 > URL: https://issues.apache.org/jira/browse/NIFI-5108 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Joseph Witt >Assignee: Joseph Witt >Priority: Major > Fix For: 1.7.0 > > > https://commons.apache.org/proper/commons-compress/security-reports.html > {quote}./nar/framework/nifi-framework-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > ./nar/extensions/nifi-media-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.14.jar > > ./nar/extensions/nifi-avro-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hive-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.4.1.jar > > ./nar/extensions/nifi-parquet-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-kudu-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-beats-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > ./nar/extensions/nifi-record-serialization-services-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hadoop-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > ./nar/extensions/nifi-confluent-platform-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-lumberjack-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > 
./nar/extensions/nifi-hbase_1_1_2-client-service-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.4.1.jar > > ./nar/extensions/nifi-site-to-site-reporting-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-cassandra-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hadoop-libraries-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-standard-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > ./nar/extensions/nifi-registry-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hwx-schema-registry-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > {quote} > And spring data commons 1.13.3 which is in our redis bundle needs to be > updated > https://pivotal.io/security/cve-2018-1273 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5108) Update to Commons Compress to 1.16.1
[ https://issues.apache.org/jira/browse/NIFI-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450418#comment-16450418 ] ASF GitHub Bot commented on NIFI-5108: -- Github user alopresto commented on the issue: https://github.com/apache/nifi/pull/2651 Ran `contrib-check` and all tests pass. Ran a simple flow using `UnpackContent` against the MiNiFi source zips and everything unpacked correctly. +1, merging. > Update to Commons Compress to 1.16.1 > > > Key: NIFI-5108 > URL: https://issues.apache.org/jira/browse/NIFI-5108 > Project: Apache NiFi > Issue Type: Improvement > Reporter: Joseph Witt > Assignee: Joseph Witt > Priority: Major -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2651: NIFI-5108 Updated all explicit refs and media nar usage of...
Github user alopresto commented on the issue: https://github.com/apache/nifi/pull/2651 Ran `contrib-check` and all tests pass. Ran a simple flow using `UnpackContent` against the MiNiFi source zips and everything unpacked correctly. +1, merging. ---
[jira] [Commented] (NIFI-5108) Update to Commons Compress to 1.16.1
[ https://issues.apache.org/jira/browse/NIFI-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450415#comment-16450415 ] ASF GitHub Bot commented on NIFI-5108: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2651 > Update to Commons Compress to 1.16.1 > > > Key: NIFI-5108 > URL: https://issues.apache.org/jira/browse/NIFI-5108 > Project: Apache NiFi > Issue Type: Improvement > Reporter: Joseph Witt > Assignee: Joseph Witt > Priority: Major -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2651: NIFI-5108 Updated all explicit refs and media nar u...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2651 ---
[jira] [Commented] (NIFI-5108) Update to Commons Compress to 1.16.1
[ https://issues.apache.org/jira/browse/NIFI-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450413#comment-16450413 ] ASF subversion and git services commented on NIFI-5108: --- Commit ac9944ccee7e085cba3c0addd7d8584f3863631f in nifi's branch refs/heads/master from [~joewitt] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=ac9944c ] NIFI-5108 Updated all explicit refs and media nar usage of commons-compress to latest version and updated spring redis client This closes #2651. Signed-off-by: Andy LoPresto > Update to Commons Compress to 1.16.1 > > > Key: NIFI-5108 > URL: https://issues.apache.org/jira/browse/NIFI-5108 > Project: Apache NiFi > Issue Type: Improvement > Reporter: Joseph Witt > Assignee: Joseph Witt > Priority: Major -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-432) ApplyTemplate is missing docs
[ https://issues.apache.org/jira/browse/MINIFICPP-432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450389#comment-16450389 ] ASF GitHub Bot commented on MINIFICPP-432: -- GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/311 MINIFICPP-432 Added docs for ApplyTemplate Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-432 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/311.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #311 commit e1cfba0075f8a90100fb73228fdfea6b9a7d72df Author: Andrew I. Christianson Date: 2018-04-24T18:55:55Z MINIFICPP-432 Added docs for ApplyTemplate > ApplyTemplate is missing docs > - > > Key: MINIFICPP-432 > URL: https://issues.apache.org/jira/browse/MINIFICPP-432 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > The ApplyTemplate processor needs documentation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #311: MINIFICPP-432 Added docs for ApplyTemplat...
GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/311 MINIFICPP-432 Added docs for ApplyTemplate You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-432 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/311.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #311 commit e1cfba0075f8a90100fb73228fdfea6b9a7d72df Author: Andrew I. Christianson Date: 2018-04-24T18:55:55Z MINIFICPP-432 Added docs for ApplyTemplate ---
[jira] [Commented] (MINIFICPP-451) Add additional deps to build instructions
[ https://issues.apache.org/jira/browse/MINIFICPP-451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450381#comment-16450381 ] ASF GitHub Bot commented on MINIFICPP-451: -- GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/310 MINIFICPP-451 Added additional build deps required for external projects Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-451 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/310.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #310 commit 4a7031bf659ac5f0e89a92c1d87198a34310f7b1 Author: Andrew I. Christianson Date: 2018-04-24T18:50:13Z MINIFICPP-451 Added additional build deps required for external projects > Add additional deps to build instructions > - > > Key: MINIFICPP-451 > URL: https://issues.apache.org/jira/browse/MINIFICPP-451 > Project: NiFi MiNiFi C++ > Issue Type: Documentation >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > yum install -y patch autoconf automake libtool -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #310: MINIFICPP-451 Added additional build deps...
GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/310 MINIFICPP-451 Added additional build deps required for external projects You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-451 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/310.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #310 commit 4a7031bf659ac5f0e89a92c1d87198a34310f7b1 Author: Andrew I. Christianson Date: 2018-04-24T18:50:13Z MINIFICPP-451 Added additional build deps required for external projects ---
[jira] [Commented] (MINIFICPP-464) Clarify GetUSBCamera docs on scheduling behavior
[ https://issues.apache.org/jira/browse/MINIFICPP-464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450372#comment-16450372 ] ASF GitHub Bot commented on MINIFICPP-464: -- GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/309 MINIFICPP-464 Clarify GetUSBCamera docs regarding scheduling behavior… … and image quality selection Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-464 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/309.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #309 commit 6fd30cad0145cb6d38b33f2aa96b07e77e5b0824 Author: Andrew I. Christianson Date: 2018-04-24T18:43:59Z MINIFICPP-464 Clarify GetUSBCamera docs regarding scheduling behavior and image quality selection > Clarify GetUSBCamera docs on scheduling behavior > > > Key: MINIFICPP-464 > URL: https://issues.apache.org/jira/browse/MINIFICPP-464 > Project: NiFi MiNiFi C++ > Issue Type: Documentation >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > The docs are currently unclear as to how GetUSBCamera gets frames from a > camera. Since onTrigger is a NOOP, the usual scheduling mechanisms have no > effect. > Additionally, clarify how image formats are automatically chosen (and > consider adding a new ticket/change to allow users to specify format), which > is currently undocumented. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #309: MINIFICPP-464 Clarify GetUSBCamera docs r...
GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/309 MINIFICPP-464 Clarify GetUSBCamera docs regarding scheduling behavior… … and image quality selection You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-464 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/309.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #309 commit 6fd30cad0145cb6d38b33f2aa96b07e77e5b0824 Author: Andrew I. Christianson Date: 2018-04-24T18:43:59Z MINIFICPP-464 Clarify GetUSBCamera docs regarding scheduling behavior and image quality selection ---
[jira] [Created] (NIFI-5116) Add ability for nifi-toolkit to prepare nifi.properties for use with CLI
Bryan Bende created NIFI-5116: - Summary: Add ability for nifi-toolkit to prepare nifi.properties for use with CLI Key: NIFI-5116 URL: https://issues.apache.org/jira/browse/NIFI-5116 Project: Apache NiFi Issue Type: Improvement Reporter: Bryan Bende Since the CLI needs to know the information to connect to a NiFi instance, and this will typically be put into a properties file for re-use with CLI commands, it would be nice if there were an easy way to produce a CLI properties file from a given nifi.properties. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
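The conversion the ticket asks for can be sketched as below. This is a hypothetical illustration only: the CLI-side property name (`baseUrl`) and the mapping rules are assumptions, not the actual nifi-toolkit implementation or its key names.

```python
# Hypothetical sketch of deriving a CLI connection properties file from
# nifi.properties. The CLI key "baseUrl" and the fallback defaults are
# assumptions for illustration, not the real nifi-toolkit behavior.

def parse_properties(text):
    """Parse simple key=value lines, skipping comments and blank lines."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def to_cli_properties(nifi_props):
    """Map a few nifi.properties web entries onto an assumed CLI 'baseUrl' key."""
    host = nifi_props.get("nifi.web.https.host") or nifi_props.get("nifi.web.http.host", "localhost")
    port = nifi_props.get("nifi.web.https.port") or nifi_props.get("nifi.web.http.port", "8080")
    # Prefer HTTPS whenever an HTTPS port is configured.
    scheme = "https" if nifi_props.get("nifi.web.https.port") else "http"
    return {"baseUrl": f"{scheme}://{host}:{port}"}
```

A real implementation would also need to carry over keystore/truststore settings for secured instances; the sketch only shows the URL derivation.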
[jira] [Commented] (MINIFICPP-470) remove extensions list.
[ https://issues.apache.org/jira/browse/MINIFICPP-470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450320#comment-16450320 ] ASF GitHub Bot commented on MINIFICPP-470: -- GitHub user phrocker opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/308 MINIFICPP-470: Remove old extensions listing Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-470 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/308.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #308 commit 7ccdf50281805452c1efe62f1319c29fec1a423c Author: Marc Parisi Date: 2018-04-24T18:06:25Z MINIFICPP-470: Remove old extensions listing > remove extensions list. > > > Key: MINIFICPP-470 > URL: https://issues.apache.org/jira/browse/MINIFICPP-470 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: marco polo >Assignee: marco polo >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #308: MINIFICPP-470: Remove old extensions list...
GitHub user phrocker opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/308 MINIFICPP-470: Remove old extensions listing Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-470 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/308.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #308 commit 7ccdf50281805452c1efe62f1319c29fec1a423c Author: Marc Parisi Date: 2018-04-24T18:06:25Z MINIFICPP-470: Remove old extensions listing ---
[jira] [Created] (MINIFICPP-470) remove extensions list.
marco polo created MINIFICPP-470: Summary: remove extensions list. Key: MINIFICPP-470 URL: https://issues.apache.org/jira/browse/MINIFICPP-470 Project: NiFi MiNiFi C++ Issue Type: Improvement Reporter: marco polo Assignee: marco polo -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-457) Network management controller service for interface binding for socket
[ https://issues.apache.org/jira/browse/MINIFICPP-457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450224#comment-16450224 ] ASF GitHub Bot commented on MINIFICPP-457: -- Github user achristianson commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/306 Is there a way to add a unit/integration test for this? > Network management controller service for interface binding for socket > -- > > Key: MINIFICPP-457 > URL: https://issues.apache.org/jira/browse/MINIFICPP-457 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: bqiu >Assignee: bqiu >Priority: Minor > > Network management controller service for interface binding for socket -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-469) Implement base64 encode/decode EL functions
[ https://issues.apache.org/jira/browse/MINIFICPP-469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450222#comment-16450222 ] ASF GitHub Bot commented on MINIFICPP-469: -- GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/307 MINIFICPP-469 Added encode/decode base64 EL functions Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-469 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/307.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #307 commit 18387e251a090bd4f38b9a5d5e7e287589363193 Author: Andrew I. Christianson Date: 2018-04-24T17:06:46Z MINIFICPP-469 Added encode/decode base64 EL functions > Implement base64 encode/decode EL functions > --- > > Key: MINIFICPP-469 > URL: https://issues.apache.org/jira/browse/MINIFICPP-469 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > * > [base64Encode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64encode] > * > [base64Decode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64decode] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
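The expected semantics of the base64Encode/base64Decode EL functions referenced above follow standard Base64 over UTF-8 text. A minimal sketch of the round-trip, written in Python for illustration (the MiNiFi C++ implementation itself is in C++):

```python
import base64

def base64_encode(s: str) -> str:
    """Illustrative mirror of the EL base64Encode function: UTF-8 text -> Base64 text."""
    return base64.b64encode(s.encode("utf-8")).decode("ascii")

def base64_decode(s: str) -> str:
    """Illustrative mirror of the EL base64Decode function: Base64 text -> UTF-8 text."""
    return base64.b64decode(s.encode("ascii")).decode("utf-8")
```

For example, `base64_encode("admin:admin")` yields `"YWRtaW46YWRtaW4="`, and decoding that string recovers the original text, matching the behavior documented in the NiFi Expression Language Guide.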
[GitHub] nifi-minifi-cpp issue #306: MINIFICPP-457: Network Management Controller Ser...
Github user achristianson commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/306 Is there a way to add a unit/integration test for this? ---
[GitHub] nifi-minifi-cpp pull request #307: MINIFICPP-469 Added encode/decode base64 ...
GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/307 MINIFICPP-469 Added encode/decode base64 EL functions Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-469 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/307.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #307 commit 18387e251a090bd4f38b9a5d5e7e287589363193 Author: Andrew I. Christianson Date: 2018-04-24T17:06:46Z MINIFICPP-469 Added encode/decode base64 EL functions ---
[jira] [Commented] (NIFI-4196) *S3 processors do not expose Proxy Authentication settings
[ https://issues.apache.org/jira/browse/NIFI-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450206#comment-16450206 ] ASF GitHub Bot commented on NIFI-4196: -- Github user ottobackwards commented on the issue: https://github.com/apache/nifi/pull/2016 @trixpan can you de-conflict? > *S3 processors do not expose Proxy Authentication settings > -- > > Key: NIFI-4196 > URL: https://issues.apache.org/jira/browse/NIFI-4196 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Andre F de Miranda >Assignee: Andre F de Miranda >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2016: NIFI-4196 - Expose AWS proxy authentication settings
Github user ottobackwards commented on the issue: https://github.com/apache/nifi/pull/2016 @trixpan can you de-conflict? ---
[jira] [Commented] (NIFI-5092) S2S Bulletin Reporting Task will not send recent bulletins after a restart
[ https://issues.apache.org/jira/browse/NIFI-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450187#comment-16450187 ] ASF GitHub Bot commented on NIFI-5092: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2643 Reviewing... > S2S Bulletin Reporting Task will not send recent bulletins after a restart > -- > > Key: NIFI-5092 > URL: https://issues.apache.org/jira/browse/NIFI-5092 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > Bulletins are stored in memory by NiFi on a 5-minute rolling window. Each > bulletin has a bulletin ID and it starts with the value 0. Upon NiFi > restarts, the new generated bulletins will also have an ID starting with the > value 0. > Currently the reporting task is locally storing the ID of the last bulletin > sent through S2S. But this should not be the case because if the last > bulletin ID is X when NiFi is restarted, then all new generated bulletins > with ID <= X will be ignored. > The state management of this reporting task should be completely removed. > Starting/stopping the reporting task won't be impacting the behavior since > the ID of the last bulletin sent is stored in a local variable. > *Current workaround* - after a NiFi restart, stop the reporting task, clear > the state of the reporting task and start the reporting task. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
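The failure mode described in the ticket can be sketched as follows. This is an illustrative model only, not the actual reporting task code: bulletin IDs restart at 0 after a NiFi restart, so filtering against a persisted last-seen ID silently drops every fresh bulletin with an ID less than or equal to the old value.

```python
# Illustrative model of the NIFI-5092 bug: a reporting task that filters
# bulletins against the last ID it has seen. The data shapes are assumptions.

def select_bulletins(bulletins, last_seen_id):
    """Return only bulletins with an ID greater than the last one sent."""
    return [b for b in bulletins if b["id"] > last_seen_id]

# Before the restart, bulletins 0..41 were sent and last_seen_id == 41
# was persisted in state.
persisted_last_seen = 41

# After the restart, NiFi generates new bulletins with IDs starting at 0 again.
fresh_bulletins = [{"id": i, "msg": f"bulletin {i}"} for i in range(5)]

# Every fresh bulletin has id <= 41, so all of them are dropped -- which is
# why the ticket removes state management for this task entirely.
dropped = select_bulletins(fresh_bulletins, persisted_last_seen)
```

With the fix, the last-seen ID lives only in a local variable that resets with the process, so a restart never leaves the task filtering against a stale high-water mark.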
[GitHub] nifi issue #2643: NIFI-5092 - Removed local state management for S2S Bulleti...
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2643 Reviewing... ---
[jira] [Assigned] (NIFI-5108) Update to Commons Compress to 1.16.1
[ https://issues.apache.org/jira/browse/NIFI-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph Witt reassigned NIFI-5108: - Assignee: Joseph Witt (was: Sivaprasanna Sethuraman) > Update to Commons Compress to 1.16.1 > > > Key: NIFI-5108 > URL: https://issues.apache.org/jira/browse/NIFI-5108 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Joseph Witt >Assignee: Joseph Witt >Priority: Major > > https://commons.apache.org/proper/commons-compress/security-reports.html > {quote}./nar/framework/nifi-framework-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > ./nar/extensions/nifi-media-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.14.jar > > ./nar/extensions/nifi-avro-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hive-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.4.1.jar > > ./nar/extensions/nifi-parquet-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-kudu-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-beats-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > ./nar/extensions/nifi-record-serialization-services-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hadoop-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > ./nar/extensions/nifi-confluent-platform-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-lumberjack-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > 
./nar/extensions/nifi-hbase_1_1_2-client-service-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.4.1.jar > > ./nar/extensions/nifi-site-to-site-reporting-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-cassandra-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hadoop-libraries-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-standard-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > ./nar/extensions/nifi-registry-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hwx-schema-registry-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > {quote} > And spring data commons 1.13.3 which is in our redis bundle needs to be > updated > https://pivotal.io/security/cve-2018-1273 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5041) Add convenient SPNEGO/Kerberos authentication support to LivySessionController
[ https://issues.apache.org/jira/browse/NIFI-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450155#comment-16450155 ] ASF GitHub Bot commented on NIFI-5041: -- Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2630#discussion_r183797310 --- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-hadoop-utils/src/main/java/org/apache/nifi/hadoop/KerberosConfiguration.java --- @@ -0,0 +1,52 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.hadoop; + +import org.apache.hadoop.security.authentication.util.KerberosUtil; + +import javax.security.auth.login.AppConfigurationEntry; +import java.util.HashMap; +import java.util.Map; + +/** + * Modified Kerberos configuration class from {@link org.apache.hadoop.security.authentication.client.KerberosAuthenticator.KerberosConfiguration} + * that requires authentication from a keytab. + */ +public class KerberosConfiguration extends javax.security.auth.login.Configuration { --- End diff -- I think we'll need an entry in the NOTICE for this class as it is derived from a class in hadoop-auth. 
See [here](https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-nar/src/main/resources/META-INF/NOTICE#L7) for an example of such attribution. @joewitt does that sound right? > Add convenient SPNEGO/Kerberos authentication support to LivySessionController > -- > > Key: NIFI-5041 > URL: https://issues.apache.org/jira/browse/NIFI-5041 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.5.0 >Reporter: Peter Toth >Priority: Minor > > Livy requires SPNEGO/Kerberos authentication on a secured cluster. Initiating > such an authentication from NiFi is viable by providing a > java.security.auth.login.config system property > (https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/lab/part6.html), > but this is a bit cumbersome and needs kinit running outside of NiFi. > An alternative and more sophisticated solution would be to do the SPNEGO > negotiation programmatically. > * This solution would add some new properties to the LivySessionController > to fetch kerberos principal and password/keytab > * Add the required HTTP Negotiate header (with an SPNEGO token) to the > HttpURLConnection to do the authentication programmatically > (https://tools.ietf.org/html/rfc4559) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2630: NIFI-5041 Adds SPNEGO authentication to LivySession...
Github user mattyb149 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2630#discussion_r183797310 --- Diff: nifi-nar-bundles/nifi-extension-utils/nifi-hadoop-utils/src/main/java/org/apache/nifi/hadoop/KerberosConfiguration.java --- @@ -0,0 +1,52 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.hadoop; + +import org.apache.hadoop.security.authentication.util.KerberosUtil; + +import javax.security.auth.login.AppConfigurationEntry; +import java.util.HashMap; +import java.util.Map; + +/** + * Modified Kerberos configuration class from {@link org.apache.hadoop.security.authentication.client.KerberosAuthenticator.KerberosConfiguration} + * that requires authentication from a keytab. + */ +public class KerberosConfiguration extends javax.security.auth.login.Configuration { --- End diff -- I think we'll need an entry in the NOTICE for this class as it is derived from a class in hadoop-auth. See [here](https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-hive-bundle/nifi-hive-nar/src/main/resources/META-INF/NOTICE#L7) for an example of such attribution. @joewitt does that sound right? ---
[jira] [Updated] (MINIFICPP-423) Implement encode/decode EL functions
[ https://issues.apache.org/jira/browse/MINIFICPP-423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Christianson updated MINIFICPP-423: -- Description: [Encode/Decode Functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#encode] * -[escapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapejson]- * -[escapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapexml]- * -[escapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapecsv]- * -[escapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml3]- * -[escapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml4]- * -[unescapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapejson]- * -[unescapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapexml]- * -[unescapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapecsv]- * -[unescapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml3]- * -[unescapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml4]- * -[urlEncode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urlencode]- * -[urlDecode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urldecode]- * [base64Encode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64encode] * [base64Decode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64decode] was: [Encode/Decode Functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#encode] * -[escapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapejson]- * 
-[escapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapexml]- * -[escapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapecsv]- * -[escapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml3]- * -[escapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml4]- * -[unescapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapejson]- * -[unescapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapexml]- * -[unescapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapecsv]- * -[unescapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml3]- * -[unescapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml4]- * [urlEncode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urlencode] * [urlDecode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urldecode] * [base64Encode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64encode] * [base64Decode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64decode] > Implement encode/decode EL functions > > > Key: MINIFICPP-423 > URL: https://issues.apache.org/jira/browse/MINIFICPP-423 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > [Encode/Decode > Functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#encode] > * > -[escapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapejson]- > * > -[escapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapexml]- > * > 
-[escapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapecsv]- > * > -[escapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml3]- > * > -[escapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml4]- > * > -[unescapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapejson]- > * > -[unescapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapexml]- > * > -[unescapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapecsv]- > * > -[unescapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml3]- > * >
[jira] [Created] (MINIFICPP-469) Implement base64 encode/decode EL functions
Andrew Christianson created MINIFICPP-469: - Summary: Implement base64 encode/decode EL functions Key: MINIFICPP-469 URL: https://issues.apache.org/jira/browse/MINIFICPP-469 Project: NiFi MiNiFi C++ Issue Type: Improvement Reporter: Andrew Christianson Assignee: Andrew Christianson * [base64Encode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64encode] * [base64Decode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64decode] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-5108) Update to Commons Compress to 1.16.1
[ https://issues.apache.org/jira/browse/NIFI-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sivaprasanna Sethuraman reassigned NIFI-5108: - Assignee: Sivaprasanna Sethuraman > Update to Commons Compress to 1.16.1 > > > Key: NIFI-5108 > URL: https://issues.apache.org/jira/browse/NIFI-5108 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Joseph Witt >Assignee: Sivaprasanna Sethuraman >Priority: Major > > https://commons.apache.org/proper/commons-compress/security-reports.html > {quote}./nar/framework/nifi-framework-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > ./nar/extensions/nifi-media-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.14.jar > > ./nar/extensions/nifi-avro-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hive-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.4.1.jar > > ./nar/extensions/nifi-parquet-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-kudu-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-beats-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > ./nar/extensions/nifi-record-serialization-services-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hadoop-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > ./nar/extensions/nifi-confluent-platform-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-lumberjack-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > 
./nar/extensions/nifi-hbase_1_1_2-client-service-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.4.1.jar > > ./nar/extensions/nifi-site-to-site-reporting-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-cassandra-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hadoop-libraries-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-standard-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.15.jar > > ./nar/extensions/nifi-registry-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > > ./nar/extensions/nifi-hwx-schema-registry-nar-1.7.0-SNAPSHOT.nar-unpacked/META-INF/bundled-dependencies/commons-compress-1.8.1.jar > {quote} > And spring data commons 1.13.3 which is in our redis bundle needs to be > updated > https://pivotal.io/security/cve-2018-1273 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-5105) Update AWS SDK (Spring 2018)
[ https://issues.apache.org/jira/browse/NIFI-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sivaprasanna Sethuraman reassigned NIFI-5105: - Assignee: Sivaprasanna Sethuraman > Update AWS SDK (Spring 2018) > > > Key: NIFI-5105 > URL: https://issues.apache.org/jira/browse/NIFI-5105 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.6.0 >Reporter: James Wing >Assignee: Sivaprasanna Sethuraman >Priority: Minor > > Update the AWS SDK version used by nifi-aws-bundle to a recent SDK, with > support for newer AWS features, regions, etc. As part of the upgrade, we > should specify the individual SDK sub-component maven coordinates we actually > use, rather than the entire SDK, to reduce the size of binary distributions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5073) JMSConnectionFactory doesn't resolve 'variables' properly
[ https://issues.apache.org/jira/browse/NIFI-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sivaprasanna Sethuraman updated NIFI-5073: -- Status: Patch Available (was: In Progress) > JMSConnectionFactory doesn't resolve 'variables' properly > - > > Key: NIFI-5073 > URL: https://issues.apache.org/jira/browse/NIFI-5073 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.6.0, 1.5.0 >Reporter: Matthew Clarke >Assignee: Sivaprasanna Sethuraman >Priority: Major > Attachments: > 0001-NIFI-5073-JMSConnectionFactoryProvider-now-resolves-.patch > > > Create a new process Group. > Add "Variables" to the process group: > for example: > broker_uri=tcp://localhost:4141 > client_libs=/NiFi/custom-lib-dir/MQlib > con_factory=blah > Then while that process group is selected, create a controller service. > Create JMSConnectionFactory. > Configure this controller service to use EL for PG defined variables above: > ${con_factory}, ${con_factory}, and ${broker_uri} > The controller service will remain invalid because the EL statements are not > properly resolved to their set values. > Doing the exact same thing above using the external NiFi registry file works > as expected. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
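The reproduction above hinges on `${...}` references resolving against the process group's variable registry. As a rough illustration of the expected behavior (a toy substitution, not NiFi's actual Expression Language engine), resolution amounts to replacing each `${name}` with the registered value:

```python
import re

# Process group "Variables" from the reproduction steps above.
variables = {
    "broker_uri": "tcp://localhost:4141",
    "client_libs": "/NiFi/custom-lib-dir/MQlib",
    "con_factory": "blah",
}

def resolve_el(value, registry):
    """Substitute ${name} references from the registry; unknown names become ''."""
    return re.sub(r"\$\{(\w+)\}", lambda m: registry.get(m.group(1), ""), value)

print(resolve_el("${broker_uri}", variables))  # tcp://localhost:4141
```

The bug report is that JMSConnectionFactoryProvider left such property values unresolved, so the controller service stayed invalid even though the variables were defined in scope.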
[jira] [Updated] (NIFI-5073) JMSConnectionFactory doesn't resolve 'variables' properly
[ https://issues.apache.org/jira/browse/NIFI-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sivaprasanna Sethuraman updated NIFI-5073: -- Attachment: 0001-NIFI-5073-JMSConnectionFactoryProvider-now-resolves-.patch > JMSConnectionFactory doesn't resolve 'variables' properly > - > > Key: NIFI-5073 > URL: https://issues.apache.org/jira/browse/NIFI-5073 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0, 1.6.0 >Reporter: Matthew Clarke >Assignee: Sivaprasanna Sethuraman >Priority: Major > Attachments: > 0001-NIFI-5073-JMSConnectionFactoryProvider-now-resolves-.patch > > > Create a new process Group. > Add "Variables" to the process group: > for example: > broker_uri=tcp://localhost:4141 > client_libs=/NiFi/custom-lib-dir/MQlib > con_factory=blah > Then while that process group is selected, create a controller service. > Create JMSConnectionFactory. > Configure this controller service to use EL for PG defined variables above: > ${con_factory}, ${con_factory}, and ${broker_uri} > The controller service will remain invalid because the EL statements are not > properly resolved to their set values. > Doing the exact same thing above using the external NiFi registry file works > as expected. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-543) Provide extensions a way to indicate that they can run only on primary node, if clustered
[ https://issues.apache.org/jira/browse/NIFI-543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450119#comment-16450119 ] ASF GitHub Bot commented on NIFI-543: - Github user zenfenan commented on a diff in the pull request: https://github.com/apache/nifi/pull/2509#discussion_r183786332 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-processor-configuration.js --- @@ -741,8 +742,8 @@ } }); -// show the execution node option if we're cluster or we're currently configured to run on the primary node only -if (nfClusterSummary.isClustered() || executionNode === 'PRIMARY') { +// show the execution node option if we're clustered and execution node is not restricted to run only in primary node +if (nfClusterSummary.isClustered() && executionNodeRestricted !== true) { --- End diff -- @mcgilman @markap14 Bump... Any update? If you're okay, I can make the commit. > Provide extensions a way to indicate that they can run only on primary node, > if clustered > - > > Key: NIFI-543 > URL: https://issues.apache.org/jira/browse/NIFI-543 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core Framework, Documentation Website, Extensions >Reporter: Mark Payne >Assignee: Sivaprasanna Sethuraman >Priority: Major > > There are Processors that are known to be problematic if run from multiple > nodes simultaneously. These processors should be able to use a > @PrimaryNodeOnly annotation (or something similar) to indicate that they can > be scheduled to run only on primary node if run in a cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
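The ticket proposes a @PrimaryNodeOnly-style annotation that the framework consults at scheduling time. A toy sketch of the idea (hypothetical names; NiFi's real mechanism is a Java annotation inspected by the framework, not this decorator):

```python
def primary_node_only(cls):
    """Marker 'annotation': tag the processor class for the scheduler to inspect."""
    cls.PRIMARY_NODE_ONLY = True
    return cls

@primary_node_only
class ListFilesProcessor:  # hypothetical processor that must not run on all nodes
    def run(self):
        return "listed"

def schedule(processor_cls, is_primary):
    """Refuse to run primary-only processors on non-primary cluster nodes."""
    if getattr(processor_cls, "PRIMARY_NODE_ONLY", False) and not is_primary:
        return None  # framework would skip scheduling here
    return processor_cls().run()

print(schedule(ListFilesProcessor, is_primary=False))  # None
```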
[jira] [Updated] (NIFI-5073) JMSConnectionFactory doesn't resolve 'variables' properly
[ https://issues.apache.org/jira/browse/NIFI-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sivaprasanna Sethuraman updated NIFI-5073: -- Summary: JMSConnectionFactory doesn't resolve 'variables' properly (was: Process Group defined "variables" are not being properly used by in scope controller services) > JMSConnectionFactory doesn't resolve 'variables' properly > - > > Key: NIFI-5073 > URL: https://issues.apache.org/jira/browse/NIFI-5073 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0, 1.6.0 >Reporter: Matthew Clarke >Assignee: Sivaprasanna Sethuraman >Priority: Major > > Create a new process Group. > Add "Variables" to the process group: > for example: > broker_uri=tcp://localhost:4141 > client_libs=/NiFi/custom-lib-dir/MQlib > con_factory=blah > Then while that process group is selected, create a controller service. > Create JMSConnectionFactory. > Configure this controller service to use EL for PG defined variables above: > ${con_factory}, ${con_factory}, and ${broker_uri} > The controller service will remain invalid because the EL statements are not > properly resolved to their set values. > Doing the exact same thing above using the external NiFi registry file works > as expected. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5073) Process Group defined "variables" are not being properly used by in scope controller services
[ https://issues.apache.org/jira/browse/NIFI-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450106#comment-16450106 ] ASF GitHub Bot commented on NIFI-5073: -- GitHub user zenfenan opened a pull request: https://github.com/apache/nifi/pull/2653 NIFI-5073: JMSConnectionFactoryProvider now resolves EL Expression JMSConnectionFactoryProvider now resolves EL Expression from VariableRegistry. **Summary** - Replaced custom file validator with `StandardValidators.createURLorFileValidator` - `BROKER_URI` was not properly evaluated. Fixed it. - Added a unit test to confirm that EL expressions are properly evaluated and validated --- Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? 
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/zenfenan/nifi NIFI-5073 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2653.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2653 commit ab297714e64536625bbbe2f60262fd56be26d64a Author: zenfenan Date: 2018-04-24T15:35:05Z NIFI-5073: JMSConnectionFactoryProvider now resolves EL Expression from VariableRegistry > Process Group defined "variables" are not being properly used by in scope > controller services > - > > Key: NIFI-5073 > URL: https://issues.apache.org/jira/browse/NIFI-5073 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0, 1.6.0 >Reporter: Matthew Clarke >Assignee: Sivaprasanna Sethuraman >Priority: Major > > Create a new process Group. > Add "Variables" to the process group: > for example: > broker_uri=tcp://localhost:4141 > client_libs=/NiFi/custom-lib-dir/MQlib > con_factory=blah > Then while that process group is selected, create a controller service. > Create JMSConnectionFactory. > Configure this controller service to use EL for PG defined variables above: > ${con_factory}, ${con_factory}, and ${broker_uri} > The controller service will remain invalid because the EL statements are not > properly resolved to their set values. > Doing the exact same thing above using the external NiFi registry file works > as expected. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5073) Process Group defined "variables" are not being properly used by in scope controller services
[ https://issues.apache.org/jira/browse/NIFI-5073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450107#comment-16450107 ] Sivaprasanna Sethuraman commented on NIFI-5073: --- [~msclarke] As mentioned above, the problem is not actually "Same scope CS not able to use processor group 'variables'". The issue is with JMSConnectionFactoryProvider. I made the changes and raised a PR. I'm also changing the Jira title; feel free to let me know if you have any concerns. > Process Group defined "variables" are not being properly used by in scope > controller services > - > > Key: NIFI-5073 > URL: https://issues.apache.org/jira/browse/NIFI-5073 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0, 1.6.0 >Reporter: Matthew Clarke >Assignee: Sivaprasanna Sethuraman >Priority: Major > > Create a new process Group. > Add "Variables" to the process group: > for example: > broker_uri=tcp://localhost:4141 > client_libs=/NiFi/custom-lib-dir/MQlib > con_factory=blah > Then while that process group is selected, create a controller service. > Create JMSConnectionFactory. > Configure this controller service to use EL for PG defined variables above: > ${con_factory}, ${con_factory}, and ${broker_uri} > The controller service will remain invalid because the EL statements are not > properly resolved to their set values. > Doing the exact same thing above using the external NiFi registry file works > as expected. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4561) ExecuteSQL Stopped Returning FlowFile for non-ResultSet Queries
[ https://issues.apache.org/jira/browse/NIFI-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-4561: --- Fix Version/s: 1.7.0 > ExecuteSQL Stopped Returning FlowFile for non-ResultSet Queries > --- > > Key: NIFI-4561 > URL: https://issues.apache.org/jira/browse/NIFI-4561 > Project: Apache NiFi > Issue Type: Bug >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > Fix For: 1.7.0 > > > While most people use ExecuteSQL for Select statements, some JDBC drivers > allow you to execute any kind of statement, including multi-statement > requests. > This allowed users to submit multiple SQL statements in one JDBC Statement > and get back multiple result sets. This was part of the reason I wrote > [NIFI-3432]. > After having NIFI-3432 merged, I found that some request types no longer > cause a FlowFile to be generated because there is no ResultSet. Also, if > request types are mixed, such as an insert followed by a Select, then no > ResultSet is returned because the first result is not a result set but an > Update Count. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
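The underlying JDBC behavior described here is that executing a mixed batch may yield an update count rather than a ResultSet as the first result, which ExecuteSQL must detect instead of assuming rows. The same distinction can be seen with Python's sqlite3 as a stand-in for JDBC (this is only an analogy, not NiFi code): `cursor.description` is None for statements that produce no result set.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")

results = []
for sql in ["INSERT INTO t VALUES (1)", "SELECT id FROM t"]:
    cur.execute(sql)
    if cur.description is None:        # no ResultSet: this is an update count
        results.append(("update", cur.rowcount))
    else:                              # a ResultSet: fetch the rows
        results.append(("rows", cur.fetchall()))

print(results)  # [('update', 1), ('rows', [(1,)])]
```

In JDBC terms, this is the loop over `Statement.execute()` / `getUpdateCount()` / `getMoreResults()` that the fix needs so an insert followed by a select still produces a FlowFile.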
[jira] [Updated] (NIFI-4561) ExecuteSQL Stopped Returning FlowFile for non-ResultSet Queries
[ https://issues.apache.org/jira/browse/NIFI-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-4561: --- Resolution: Fixed Status: Resolved (was: Patch Available) > ExecuteSQL Stopped Returning FlowFile for non-ResultSet Queries > --- > > Key: NIFI-4561 > URL: https://issues.apache.org/jira/browse/NIFI-4561 > Project: Apache NiFi > Issue Type: Bug >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > Fix For: 1.7.0 > > > While most people use ExecuteSQL for Select statements, some JDBC drivers > allow you to execute any kind of statement, including multi-statement > requests. > This allowed users to submit multiple SQL statements in one JDBC Statement > and get back multiple result sets. This was part of the reason I wrote > [NIFI-3432]. > After having NIFI-3432 merged, I found that some request types no longer > cause a FlowFile to be generated because there is no ResultSet. Also, if > request types are mixed, such as an insert followed by a Select, then no > ResultSet is returned because the first result is not a result set but an > Update Count. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4561) ExecuteSQL Stopped Returning FlowFile for non-ResultSet Queries
[ https://issues.apache.org/jira/browse/NIFI-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-4561: --- Status: Patch Available (was: Open) > ExecuteSQL Stopped Returning FlowFile for non-ResultSet Queries > --- > > Key: NIFI-4561 > URL: https://issues.apache.org/jira/browse/NIFI-4561 > Project: Apache NiFi > Issue Type: Bug >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > > While most people use ExecuteSQL for Select statements, some JDBC drivers > allow you to execute any kind of statement, including multi-statement > requests. > This allowed users to submit multiple SQL statements in one JDBC Statement > and get back multiple result sets. This was part of the reason I wrote > [NIFI-3432]. > After having NIFI-3432 merged, I found that some request types no longer > cause a FlowFile to be generated because there is no ResultSet. Also, if > request types are mixed, such as an insert followed by a Select, then no > ResultSet is returned because the first result is not a result set but an > Update Count. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5115) Allow scripted controller services to provide their own properties
Matt Burgess created NIFI-5115: -- Summary: Allow scripted controller services to provide their own properties Key: NIFI-5115 URL: https://issues.apache.org/jira/browse/NIFI-5115 Project: Apache NiFi Issue Type: Improvement Reporter: Matt Burgess InvokeScriptedProcessor allows the specified script to provide its own properties, which InvokeScriptedProcessor will ask for when providing all properties to the user. In this fashion the script can supply additional properties such as other Controller Services and properties. In contrast, ExecuteScript only has support for dynamic (i.e. user-defined) properties, as its script is only evaluated on each run of ExecuteScript. The scripted Controller Services are currently a hybrid of these two approaches. They are evaluated ahead of being executed, but they cannot provide their own properties to the scripted controller service dialog for the user. One use case is when you have a ScriptedReader that needs to support a Schema Registry. For a good user experience, the Schema Registry, Name/Text, Access Strategy properties (for example) should be able to be provided from the script and displayed to the user in the controller service dialog. This case tracks the additional capabilities of invoking various configuration methods on the scripted class as is done with InvokeScriptedProcessor, such as getSupportedPropertyDescriptors(), validate(), etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
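As a rough sketch of the requested capability (hypothetical names, not the NiFi scripting API): the host component queries the scripted object for the property descriptors it wants to expose, the analogue of InvokeScriptedProcessor calling getSupportedPropertyDescriptors() on the scripted class.

```python
class ScriptedReader:
    """Stand-in for a user-supplied script evaluated ahead of execution."""

    def get_supported_property_descriptors(self):
        # The script declares the properties it wants shown in the
        # controller-service dialog, e.g. a schema registry reference.
        return ["Schema Registry", "Schema Access Strategy", "Schema Name"]

def properties_for_dialog(scripted, host_properties):
    """Merge host-level properties with those supplied by the script."""
    return host_properties + scripted.get_supported_property_descriptors()

print(properties_for_dialog(ScriptedReader(), ["Script Engine", "Script Body"]))
```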
[jira] [Commented] (NIFI-4035) Implement record-based Solr processors
[ https://issues.apache.org/jira/browse/NIFI-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449998#comment-16449998 ] ASF subversion and git services commented on NIFI-4035: --- Commit e3f4720797a9777264d1732b2e1475b8f344a8f0 in nifi's branch refs/heads/master from abhinavrohatgi30 [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=e3f4720 ] NIFI-4035 - Adding PutSolrRecord Processor that reads NiFi records and indexes them into Solr as SolrDocuments Adding Test Cases for PutSolrRecord Processor Adding PutSolrRecord Processor in the list of Processors Resolving checkstyle errors Resolving checkstyle errors in test classes Adding License information and additional information about the processor 1. Implementing Batch Indexing 2. Changes for nested records 3. Removing MockRecordParser Fixing bugs with nested records Updating version of dependencies Setting Expression Language Scope This closes #2561. Signed-off-by: Bryan Bende > Implement record-based Solr processors > -- > > Key: NIFI-4035 > URL: https://issues.apache.org/jira/browse/NIFI-4035 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.2.0, 1.3.0 >Reporter: Bryan Bende >Priority: Minor > Fix For: 1.7.0 > > > Now that we have record readers and writers, we should implement variants of > the existing Solr processors that are record-based... > Processors to consider: > * PutSolrRecord - uses a configured record reader to read an incoming flow > file and insert records to Solr > * GetSolrRecord - extracts records from Solr and uses a configured record > writer to write them to a flow file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-4035) Implement record-based Solr processors
[ https://issues.apache.org/jira/browse/NIFI-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende resolved NIFI-4035. --- Resolution: Fixed Fix Version/s: 1.7.0 > Implement record-based Solr processors > -- > > Key: NIFI-4035 > URL: https://issues.apache.org/jira/browse/NIFI-4035 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.2.0, 1.3.0 >Reporter: Bryan Bende >Priority: Minor > Fix For: 1.7.0 > > > Now that we have record readers and writers, we should implement variants of > the existing Solr processors that are record-based... > Processors to consider: > * PutSolrRecord - uses a configured record reader to read an incoming flow > file and insert records to Solr > * GetSolrRecord - extracts records from Solr and uses a configured record > writer to write them to a flow file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4035) Implement record-based Solr processors
[ https://issues.apache.org/jira/browse/NIFI-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1644#comment-1644 ] ASF GitHub Bot commented on NIFI-4035: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2561 > Implement record-based Solr processors > -- > > Key: NIFI-4035 > URL: https://issues.apache.org/jira/browse/NIFI-4035 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.2.0, 1.3.0 >Reporter: Bryan Bende >Priority: Minor > Fix For: 1.7.0 > > > Now that we have record readers and writers, we should implement variants of > the existing Solr processors that are record-based... > Processors to consider: > * PutSolrRecord - uses a configured record reader to read an incoming flow > file and insert records to Solr > * GetSolrRecord - extracts records from Solr and uses a configured record > writer to write them to a flow file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4035) Implement record-based Solr processors
[ https://issues.apache.org/jira/browse/NIFI-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449992#comment-16449992 ] ASF GitHub Bot commented on NIFI-4035: -- Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2561 I was able to resolve the conflicts and everything looks good now, going to merge, thanks! > Implement record-based Solr processors > -- > > Key: NIFI-4035 > URL: https://issues.apache.org/jira/browse/NIFI-4035 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.2.0, 1.3.0 >Reporter: Bryan Bende >Priority: Minor > > Now that we have record readers and writers, we should implement variants of > the existing Solr processors that are record-based... > Processors to consider: > * PutSolrRecord - uses a configured record reader to read an incoming flow > file and insert records to Solr > * GetSolrRecord - extracts records from Solr and uses a configured record > writer to write them to a flow file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4561) ExecuteSQL Stopped Returning FlowFile for non-ResultSet Queries
[ https://issues.apache.org/jira/browse/NIFI-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449953#comment-16449953 ] ASF GitHub Bot commented on NIFI-4561: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2243 > ExecuteSQL Stopped Returning FlowFile for non-ResultSet Queries > --- > > Key: NIFI-4561 > URL: https://issues.apache.org/jira/browse/NIFI-4561 > Project: Apache NiFi > Issue Type: Bug >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > > While most people use ExecuteSQL for Select statements, some JDBC drivers > allow you to execute any kind of statement, including multi-statement > requests. > This allowed users to submit multiple SQL statements in one JDBC Statement > and get back multiple result sets. This was part of the reason I wrote > [NIFI-3432]. > After having NIFI-3432 merged, I found that some request types no longer > cause a FlowFile to be generated because there is no ResultSet. Also, if > request types are mixed, such as an insert followed by a Select, then no > ResultSet is returned because the first result is not a result set but an > Update Count. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4561) ExecuteSQL Stopped Returning FlowFile for non-ResultSet Queries
[ https://issues.apache.org/jira/browse/NIFI-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449951#comment-16449951 ] ASF subversion and git services commented on NIFI-4561: --- Commit 0390c0f1967d1a57a333d15e1ec41b06ceb88590 in nifi's branch refs/heads/master from [~patricker] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=0390c0f ] NIFI-4561 ExecuteSQL returns no FlowFile for some queries This closes #2243 Signed-off-by: Mike Thomsen > ExecuteSQL Stopped Returning FlowFile for non-ResultSet Queries > --- > > Key: NIFI-4561 > URL: https://issues.apache.org/jira/browse/NIFI-4561 > Project: Apache NiFi > Issue Type: Bug >Reporter: Peter Wicks >Assignee: Peter Wicks >Priority: Major > > While most people use ExecuteSQL for Select statements, some JDBC drivers > allow you to execute any kind of statement, including multi-statement > requests. > This allowed users to submit multiple SQL statements in one JDBC Statement > and get back multiple result sets. This was part of the reason I wrote > [NIFI-3432]. > After having NIFI-3432 merged, I found that some request types no longer > cause a FlowFile to be generated because there is no ResultSet. Also, if > request types are mixed, such as an insert followed by a Select, then no > ResultSet is returned because the first result is not a result set but an > Update Count. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4971) ReportLineageToAtlas 'complete path' strategy can miss one-time lineages
[ https://issues.apache.org/jira/browse/NIFI-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449947#comment-16449947 ] ASF GitHub Bot commented on NIFI-4971: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2542 On a related note, @ijokarumawak, when it was on the simple path I had no lineage show up at all. See what I mean? (Bottom screenshot is a hybrid of both runs) https://user-images.githubusercontent.com/108184/39193378-69634edc-47a9-11e8-9764-7d3c66ac7847.png https://user-images.githubusercontent.com/108184/39193438-8d5da1ca-47a9-11e8-9aac-ad6c32987877.png > ReportLineageToAtlas 'complete path' strategy can miss one-time lineages > > > Key: NIFI-4971 > URL: https://issues.apache.org/jira/browse/NIFI-4971 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > For the simplest example, with GetFlowFile (GFF) -> PutFlowFile (PFF), where > GFF gets files and PFF saves those files into a different directory, then the > following provenance events will be generated: > # GFF RECEIVE file1 > # PFF SEND file2 > From the above provenance events, the following entities and lineages should be > created in Atlas, labels in brackets are Atlas type names: > {code} > file1 (fs_path) -> GFF, PFF (nifi_flow_path) -> file2 (fs_path) > {code} > Entities shown in the above graph are created. However, the 'nifi_flow_path' > entity does not have inputs/outputs referencing 'fs_path', so the lineage cannot > be seen in Atlas UI. > This issue was discovered by [~nayakmahesh616] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4971) ReportLineageToAtlas 'complete path' strategy can miss one-time lineages
[ https://issues.apache.org/jira/browse/NIFI-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449945#comment-16449945 ] ASF GitHub Bot commented on NIFI-4971: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2542 Should also mention that this is how I set it up per the Jira ticket: https://user-images.githubusercontent.com/108184/39193271-26aaf70c-47a9-11e8-8a5b-ddf10652005f.png > ReportLineageToAtlas 'complete path' strategy can miss one-time lineages > > > Key: NIFI-4971 > URL: https://issues.apache.org/jira/browse/NIFI-4971 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > For the simplest example, with GetFlowFile (GFF) -> PutFlowFile (PFF), where > GFF gets files and PFF saves those files into a different directory, then the > following provenance events will be generated: > # GFF RECEIVE file1 > # PFF SEND file2 > From the above provenance events, the following entities and lineages should be > created in Atlas, labels in brackets are Atlas type names: > {code} > file1 (fs_path) -> GFF, PFF (nifi_flow_path) -> file2 (fs_path) > {code} > Entities shown in the above graph are created. However, the 'nifi_flow_path' > entity does not have inputs/outputs referencing 'fs_path', so the lineage cannot > be seen in Atlas UI. > This issue was discovered by [~nayakmahesh616] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2542: NIFI-4971: ReportLineageToAtlas complete path can miss one...
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2542 Should also mention that this is how I set it up per the Jira ticket: https://user-images.githubusercontent.com/108184/39193271-26aaf70c-47a9-11e8-8a5b-ddf10652005f.png ---
[jira] [Commented] (NIFI-4971) ReportLineageToAtlas 'complete path' strategy can miss one-time lineages
[ https://issues.apache.org/jira/browse/NIFI-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449940#comment-16449940 ] ASF GitHub Bot commented on NIFI-4971: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2542 I set everything up and set it to do `complete path.` This is what I got in Atlas: https://user-images.githubusercontent.com/108184/39193186-f45d018c-47a8-11e8-9774-5c958d81bcb3.png https://user-images.githubusercontent.com/108184/39193187-f46e1b0c-47a8-11e8-9601-2ae647d58353.png -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2542: NIFI-4971: ReportLineageToAtlas complete path can miss one...
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2542 I set everything up and set it to do `complete path.` This is what I got in Atlas: https://user-images.githubusercontent.com/108184/39193186-f45d018c-47a8-11e8-9774-5c958d81bcb3.png https://user-images.githubusercontent.com/108184/39193187-f46e1b0c-47a8-11e8-9601-2ae647d58353.png ---
[jira] [Commented] (NIFI-4284) New attributes for FlowFiles to track from which processor it came
[ https://issues.apache.org/jira/browse/NIFI-4284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449888#comment-16449888 ] Aldrin Piri commented on NIFI-4284: --- Linking as it would be helpful to evaluate both at the same time as the functionality could be similar. > New attributes for FlowFiles to track from which processor it came > -- > > Key: NIFI-4284 > URL: https://issues.apache.org/jira/browse/NIFI-4284 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.1.0 >Reporter: Ujjawal Nayak >Priority: Major > > We should create some new attributes for FlowFiles that are routed to a > failure relationship, which will identify which component routed that > FlowFile. e.g. lastProcessorName and/or lastProcessorID. This would be > helpful in error / failure handling within NiFi. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4971) ReportLineageToAtlas 'complete path' strategy can miss one-time lineages
[ https://issues.apache.org/jira/browse/NIFI-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449823#comment-16449823 ] ASF GitHub Bot commented on NIFI-4971: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2542 I have to rebuild and set up Atlas. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2542: NIFI-4971: ReportLineageToAtlas complete path can miss one...
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2542 I have to rebuild and set up Atlas. ---
[jira] [Commented] (NIFI-4035) Implement record-based Solr processors
[ https://issues.apache.org/jira/browse/NIFI-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449795#comment-16449795 ] ASF GitHub Bot commented on NIFI-4035: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2561 @abhinavrohatgi30 While you were away, I merged another Solr-related commit and that's the reason you now have conflicts. > Implement record-based Solr processors > -- > > Key: NIFI-4035 > URL: https://issues.apache.org/jira/browse/NIFI-4035 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.2.0, 1.3.0 >Reporter: Bryan Bende >Priority: Minor > > Now that we have record readers and writers, we should implement variants of > the existing Solr processors that are record-based... > Processors to consider: > * PutSolrRecord - uses a configured record reader to read an incoming flow > file and insert records to Solr > * GetSolrRecord - extracts records from Solr and uses a configured record > writer to write them to a flow file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2561: NIFI-4035 Implement record-based Solr processors
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2561 @abhinavrohatgi30 While you were away, I merged another Solr-related commit and that's the reason you now have conflicts. ---
[jira] [Commented] (NIFI-4952) JettyWebSocketClient websocket-uri property missing evaluateAttributeExpressions
[ https://issues.apache.org/jira/browse/NIFI-4952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449792#comment-16449792 ] ASF GitHub Bot commented on NIFI-4952: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2572 @claudiu-stanciu Looks like someone already made equivalent changes. When I rebased on master to bring it up to 1.7.0, the changes had already been added to master by another commit. So you can close this ticket. Thanks for the patch. > JettyWebSocketClient websocket-uri property missing > evaluateAttributeExpressions > > > Key: NIFI-4952 > URL: https://issues.apache.org/jira/browse/NIFI-4952 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Sven Van Hemel >Priority: Major > > The {{websocket-uri}} property of the > {{org.apache.nifi.websocket.jetty.JettyWebSocketClient}} class is marked as > EL enabled, but its evaluation in the {{startClient()}} method seems to be > missing an {{evaluateAttributeExpressions()}} call. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2572: NIFI-4952 Add EL support for JettyWebSocketClient and Jett...
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2572 @claudiu-stanciu Looks like someone already made equivalent changes. When I rebased on master to bring it up to 1.7.0, the changes had already been added to master by another commit. So you can close this ticket. Thanks for the patch. ---
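For readers following NIFI-4952 above: the bug is that an Expression-Language-enabled property is read without evaluating its expressions. The sketch below is a self-contained illustration of why that matters; it is not NiFi code — the `evaluate` helper and the `EL` pattern are stand-ins for NiFi's `PropertyValue.evaluateAttributeExpressions()` machinery, and the `ws.host` variable is hypothetical.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ElEvaluationSketch {
    // Matches placeholders like ${ws.host} (stand-in for NiFi EL).
    static final Pattern EL = Pattern.compile("\\$\\{([^}]+)}");

    // Simulates PropertyValue.evaluateAttributeExpressions().getValue():
    // replaces each ${name} with its value from the variable map.
    static String evaluate(String raw, Map<String, String> vars) {
        Matcher m = EL.matcher(raw);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            m.appendReplacement(out,
                    Matcher.quoteReplacement(vars.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String raw = "ws://${ws.host}:9998/echo";
        Map<String, String> vars = Map.of("ws.host", "localhost");
        // Without evaluation the literal placeholder leaks into the URI:
        System.out.println(raw);                  // ws://${ws.host}:9998/echo
        // With evaluation the URI becomes usable:
        System.out.println(evaluate(raw, vars));  // ws://localhost:9998/echo
    }
}
```

Reading only the raw value (the first println) leaves the literal `${ws.host}` in the URI, which is the failure mode reported against `startClient()`.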
[jira] [Commented] (NIFIREG-140) Nifi Registry not able to start - NoClassDefFoundError org/apache/nifi/registry/util/FileUtils
[ https://issues.apache.org/jira/browse/NIFIREG-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449614#comment-16449614 ] ASF GitHub Bot commented on NIFIREG-140: GitHub user valerybonnet opened a pull request: https://github.com/apache/nifi-registry/pull/114 NIFIREG-140: Fix classpath for Windows You can merge this pull request into a Git repository by running: $ git pull https://github.com/valerybonnet/nifi-registry NIFIREG-140 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-registry/pull/114.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #114 commit ca1e0085c6dc3068d90a576308b5b988b21dc7b7 Author: valerybonnet Date: 2018-04-24T09:06:36Z NIFIREG-140: Fix classpath for Windows > Nifi Registry not able to start - NoClassDefFoundError > org/apache/nifi/registry/util/FileUtils > -- > > Key: NIFIREG-140 > URL: https://issues.apache.org/jira/browse/NIFIREG-140 > Project: NiFi Registry > Issue Type: Bug >Affects Versions: 0.1.0 >Reporter: Gaurang Shah >Priority: Major > > while trying to start the nifi registry I am getting the following error. > nifi registry version: 0.1.0 > > {code:java} > 2018-02-06 00:11:52,665 INFO [main] org.apache.nifi.registry.NiFiRegistry > Launching NiFi Registry...
> 2018-02-06 00:11:52,676 INFO [main] org.apache.nifi.registry.NiFiRegistry > Read property protection key from conf/bootstrap.conf > 2018-02-06 00:11:52,799 INFO [main] o.a.n.r.security.crypto.CryptoKeyLoader > No encryption key present in the bootstrap.conf file at > C:\nifi-registry-0.1.0-bin\nifi-registry-0.1.0\conf\bootstrap.conf > 2018-02-06 00:11:52,807 INFO [main] o.a.n.r.p.NiFiRegistryPropertiesLoader > Loaded 26 properties from > C:\nifi-registry-0.1.0-bin\nifi-registry-0.1.0\conf\nifi-registry.properties > 2018-02-06 00:11:52,811 INFO [main] org.apache.nifi.registry.NiFiRegistry > Loaded 26 properties > 2018-02-06 00:11:52,813 INFO [main] org.apache.nifi.registry.NiFiRegistry > NiFi Registry started without Bootstrap Port information provided; will not > listen for requests from Bootstrap > 2018-02-06 00:11:52,820 ERROR [main] org.apache.nifi.registry.NiFiRegistry > Failure to launch NiFi Registry due to java.lang.NoClassDefFoundError: > org/apache/nifi/registry/util/FileUtils > java.lang.NoClassDefFoundError: org/apache/nifi/registry/util/FileUtils > at org.apache.nifi.registry.NiFiRegistry.<init>(NiFiRegistry.java:97) > ~[nifi-registry-runtime-0.1.0.jar:0.1.0] > at org.apache.nifi.registry.NiFiRegistry.main(NiFiRegistry.java:158) > ~[nifi-registry-runtime-0.1.0.jar:0.1.0] > Caused by: java.lang.ClassNotFoundException: > org.apache.nifi.registry.util.FileUtils > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_161] > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_161] > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338) > ~[na:1.8.0_161] > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_161] > ... 2 common frames omitted > 2018-02-06 00:11:52,824 INFO [Thread-1] org.apache.nifi.registry.NiFiRegistry > Initiating shutdown of Jetty web server... > 2018-02-06 00:11:52,824 INFO [Thread-1] org.apache.nifi.registry.NiFiRegistry > Jetty web server shutdown completed (nicely or otherwise).
> {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-registry pull request #114: NIFIREG-140: Fix classpath for Windows
GitHub user valerybonnet opened a pull request: https://github.com/apache/nifi-registry/pull/114 NIFIREG-140: Fix classpath for Windows You can merge this pull request into a Git repository by running: $ git pull https://github.com/valerybonnet/nifi-registry NIFIREG-140 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-registry/pull/114.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #114 commit ca1e0085c6dc3068d90a576308b5b988b21dc7b7 Author: valerybonnet Date: 2018-04-24T09:06:36Z NIFIREG-140: Fix classpath for Windows ---
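The NIFIREG-140 pull request above does not include its diff here, so as a hedged illustration only: a common cause of this kind of startup NoClassDefFoundError on Windows is a launcher that joins classpath entries with the Unix `:` separator, while Windows expects `;`. The class and method names below are hypothetical and are not taken from the NIFIREG-140 patch.

```java
import java.io.File;

public class ClasspathSketch {
    // Joins jar paths with the platform's separator (File.pathSeparator
    // is ":" on Unix and ";" on Windows) instead of hard-coding ':'.
    static String joinClasspath(String... entries) {
        return String.join(File.pathSeparator, entries);
    }

    public static void main(String[] args) {
        // Prints lib/a.jar:lib/b.jar on Unix, lib/a.jar;lib/b.jar on Windows.
        System.out.println(joinClasspath("lib/a.jar", "lib/b.jar"));
    }
}
```

With a hard-coded `:`, the JVM on Windows would treat `lib/a.jar:lib/b.jar` as a single (nonexistent) path entry, and classes such as `org.apache.nifi.registry.util.FileUtils` would not be found.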
[jira] [Updated] (NIFI-4516) Add QuerySolr processor
[ https://issues.apache.org/jira/browse/NIFI-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Johannes Peter updated NIFI-4516: - Summary: Add QuerySolr processor (was: Add FetchSolr processor) > Add QuerySolr processor > --- > > Key: NIFI-4516 > URL: https://issues.apache.org/jira/browse/NIFI-4516 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Johannes Peter >Assignee: Johannes Peter >Priority: Major > Labels: features > Fix For: 1.7.0 > > > The processor shall be capable > * to query Solr within a workflow, > * to make use of standard functionalities of Solr such as faceting, > highlighting, result grouping, etc., > * to make use of NiFi's expression language to build Solr queries, > * to handle results as records. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4516) Add FetchSolr processor
[ https://issues.apache.org/jira/browse/NIFI-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Johannes Peter updated NIFI-4516: - Fix Version/s: 1.7.0 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-3576) QueryElasticsearchHttp should have a "Not Found"/"Zero results" relationship
[ https://issues.apache.org/jira/browse/NIFI-3576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-3576: - Fix Version/s: 1.7.0 > QueryElasticsearchHttp should have a "Not Found"/"Zero results" relationship > > > Key: NIFI-3576 > URL: https://issues.apache.org/jira/browse/NIFI-3576 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Joseph Percivall >Assignee: Otto Fowler >Priority: Minor > Fix For: 1.7.0 > > > In the event of a successful call, QueryElasticsearchHttp always drops the > incoming flowfile and then emits pages of results to the success > relationship. If the search returns no results then no pages of results are > emitted to the success relationship. > The processor should offer other options for handling when there are no > results returned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-3576) QueryElasticsearchHttp should have a "Not Found"/"Zero results" relationship
[ https://issues.apache.org/jira/browse/NIFI-3576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-3576: - Component/s: Extensions -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5099) Changing the destination of a relation is not identified as "Local change" by NiFi registry
[ https://issues.apache.org/jira/browse/NIFI-5099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-5099: - Resolution: Fixed Status: Resolved (was: Patch Available) > Changing the destination of a relation is not identified as "Local change" by > NiFi registry > --- > > Key: NIFI-5099 > URL: https://issues.apache.org/jira/browse/NIFI-5099 > Project: Apache NiFi > Issue Type: Bug > Components: SDLC >Affects Versions: 1.6.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 1.7.0 > > > See - NIFIREG-165 > When the destination of a relation is changed, NiFi does not > track it as a local change. > In a sample scenario, I have the flow as GenerateFF -> UpdateAttribute1 with the > UpdateAttribute2 processor hanging alone. > After I commit the flow to NiFi registry, and then change the flow to direct > the output of the GenerateFF processor to UpdateAttribute2, it is not identified > as a change by the registry. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5106) Add provenance reporting to GetSolr
[ https://issues.apache.org/jira/browse/NIFI-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-5106: - Fix Version/s: 1.7.0 > Add provenance reporting to GetSolr > --- > > Key: NIFI-5106 > URL: https://issues.apache.org/jira/browse/NIFI-5106 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Johannes Peter >Assignee: Johannes Peter >Priority: Minor > Fix For: 1.7.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5106) Add provenance reporting to GetSolr
[ https://issues.apache.org/jira/browse/NIFI-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-5106: - Component/s: Extensions -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5082) SQL processors do not handle Avro conversion of Oracle timestamps correctly
[ https://issues.apache.org/jira/browse/NIFI-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449495#comment-16449495 ] Pierre Villard commented on NIFI-5082: -- [~mike.thomsen] - when merging a PR, don't forget about closing the associated Jira by clicking "Resolve Issue" and setting a fix version :) Thanks! > SQL processors do not handle Avro conversion of Oracle timestamps correctly > --- > > Key: NIFI-5082 > URL: https://issues.apache.org/jira/browse/NIFI-5082 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Fix For: 1.7.0 > > > In JdbcCommon (used by such processors as ExecuteSQL and QueryDatabaseTable), > if a ResultSet column is not a CLOB or BLOB, its value is retrieved using > getObject(), then further processing is done based on the SQL type and/or the > Java class of the value. > However, in Oracle when getObject() is called on a Timestamp column, it > returns an Oracle-specific TIMESTAMP class which does not inherit from > java.sql.Timestamp or java.sql.Date. Thus the processing "falls through" and > its value is attempted to be inserted as a string, which violates the Avro > schema (which correctly recognized it as a long of timestamp logical type). > At least for Oracle, the right way to process a Timestamp column is to call > getTimestamp() rather than getObject(), the former returns a > java.sql.Timestamp object which would correctly be processed by the current > code. I would hope that all drivers would support this but we would want to > test on (at least) MySQL, Oracle, and PostgreSQL. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5082) SQL processors do not handle Avro conversion of Oracle timestamps correctly
[ https://issues.apache.org/jira/browse/NIFI-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-5082: - Component/s: Extensions -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5082) SQL processors do not handle Avro conversion of Oracle timestamps correctly
[ https://issues.apache.org/jira/browse/NIFI-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-5082: - Resolution: Fixed Fix Version/s: 1.7.0 Status: Resolved (was: Patch Available) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
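The "falls through" behaviour described in NIFI-5082 can be sketched in isolation. This is not the actual JdbcCommon code — `VendorTimestamp` below is a hypothetical stand-in for Oracle's driver-specific TIMESTAMP class, which does not extend `java.sql.Timestamp`:

```java
import java.sql.Timestamp;

public class TimestampFallThrough {
    // Stand-in for a vendor timestamp type that does NOT extend
    // java.sql.Timestamp (as Oracle's TIMESTAMP class does not).
    static class VendorTimestamp {
        final long millis;
        VendorTimestamp(long millis) { this.millis = millis; }
        @Override public String toString() { return "VENDOR-TS:" + millis; }
    }

    // Mirrors the branching pattern described above: only values that are
    // java.sql.Timestamp instances map to an Avro long (timestamp-millis);
    // everything else falls through and is emitted as a string.
    static Object convertForAvro(Object value) {
        if (value instanceof Timestamp) {
            return ((Timestamp) value).getTime();
        }
        return value.toString();
    }

    public static void main(String[] args) {
        // Standard JDBC timestamp: lands in the long branch.
        System.out.println(convertForAvro(new Timestamp(1524560796000L)));      // 1524560796000
        // Vendor type: falls through to the string branch -> schema violation.
        System.out.println(convertForAvro(new VendorTimestamp(1524560796000L))); // VENDOR-TS:1524560796000
    }
}
```

Because the vendor type misses the `Timestamp` branch, it is emitted as a string and violates an Avro schema declaring a long with the timestamp-millis logical type; calling `ResultSet.getTimestamp()` instead of `getObject()`, as the issue suggests, guarantees a `java.sql.Timestamp` and the first branch.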
[jira] [Commented] (NIFI-4971) ReportLineageToAtlas 'complete path' strategy can miss one-time lineages
[ https://issues.apache.org/jira/browse/NIFI-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449441#comment-16449441 ] ASF GitHub Bot commented on NIFI-4971: -- Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2542 Thank you @MikeThomsen for jumping on this. Rebased with the latest master. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2542: NIFI-4971: ReportLineageToAtlas complete path can miss one...
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2542 Thank you @MikeThomsen for jumping on this. Rebased with the latest master. ---