[GitHub] nifi pull request #2436: Nifi 4819: Added support to delete blob from Azure ...

2018-01-25 Thread zenfenan
GitHub user zenfenan opened a pull request:

https://github.com/apache/nifi/pull/2436

Nifi 4819: Added support to delete blob from Azure Storage container

Implemented a basic version that deletes an Azure blob from a Storage 
container if the blob exists. Also implemented tests.

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zenfenan/nifi NIFI-4819

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2436.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2436


commit d1290d03e68cac77e6e1f332368bbadc8ced9043
Author: zenfenaan 
Date:   2018-01-12T17:15:13Z

NIFI-4770 ListAzureBlobStorage now properly writes azure.container flowfile 
attribute

commit 615f3ebc10d94f37c7463ae2d240582397e6ce8c
Author: zenfenan 
Date:   2018-01-23T16:30:38Z

Merge branch 'master' of https://github.com/apache/nifi

commit f597a9b7aa11a48ff46931b73da216c8426542e4
Author: zenfenan 
Date:   2018-01-26T04:42:32Z

Merge remote-tracking branch 'nifi/master'

commit 391862cd1c32e763810bfecb8d5b813472ab96e3
Author: zenfenan 
Date:   2018-01-26T04:41:26Z

NIFI-4819: Added DeleteAzureBlobStorage that handles deletion of blob from 
an Azure Storage container




---


[jira] [Created] (NIFI-4819) Add support to delete blob from Azure Storage container

2018-01-25 Thread zenfenaan (JIRA)
zenfenaan created NIFI-4819:
---

 Summary: Add support to delete blob from Azure Storage container
 Key: NIFI-4819
 URL: https://issues.apache.org/jira/browse/NIFI-4819
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: zenfenaan


Implement a delete processor that handles deleting a blob from an Azure Storage 
container. This should be an extension of the nifi-azure-nar bundle. Currently, the 
Azure bundle's storage processors have support to list, fetch, and put Azure Storage 
blobs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2435: NIFI-4818: Fix transit URL parsing at Hive2JDBC and...

2018-01-25 Thread ijokarumawak
Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2435#discussion_r164017503
  
--- Diff: 
nifi-nar-bundles/nifi-atlas-bundle/nifi-atlas-reporting-task/src/main/java/org/apache/nifi/atlas/NiFiAtlasHook.java
 ---
@@ -255,7 +255,11 @@ public void commitMessages() {
 }
 return new Tuple<>(refQualifiedName, 
typedQualifiedNameToRef.get(toTypedQualifiedName(typeName, refQualifiedName)));
 }).filter(Objects::nonNull).filter(tuple -> tuple.getValue() != 
null)
-.collect(Collectors.toMap(Tuple::getKey, Tuple::getValue));
+// If duplication happens, use new value.
+.collect(Collectors.toMap(Tuple::getKey, Tuple::getValue, 
(oldValue, newValue) -> {
+logger.warn("Duplicated qualified name was found, use 
the new one. oldValue={}, newValue={}", new Object[]{oldValue, newValue});
+return newValue;
+}));
--- End diff --

While I was testing, I got the following exception:
```
2018-01-25 05:06:41,430 ERROR [Timer-Driven Process Thread-1] 
o.a.n.a.reporting.ReportLineageToAtlas 
ReportLineageToAtlas[id=057986ae-0161-1000-d0b0-1b890a17f5aa] Error running 
task ReportLineageToAtlas[id=057986ae-0161-1000-d0b0-1b890a17f5aa] due to 
java.lang.IllegalStateException: Duplicate key {Id='(type: fs_path, id: 
69be7a40-4ff8-4c4e-b714-2d394c14398d)', traits=[], values={}} NiFiAtlasHook.258
```
The exception means an existing nifi_flow_path entity has more than one entry 
pointing to the same entity with an identical qualified name in its inputs or 
outputs attribute. This happened because I was using an old test environment whose 
data was created before the Atlas integration implemented de-duplication logic. 
However, it would be more protective to handle such duplication in case it occurs 
for some other reason.
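
For reference, the three-argument Collectors.toMap overload used in the diff above 
resolves duplicate keys with a merge function instead of throwing 
IllegalStateException. A minimal, self-contained sketch of that behavior (keys and 
values are illustrative, not taken from the NiFi code):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DuplicateKeyMergeExample {
    public static void main(String[] args) {
        List<String[]> entries = Arrays.asList(
                new String[]{"path-a", "ref-1"},
                new String[]{"path-a", "ref-2"},   // duplicate key "path-a"
                new String[]{"path-b", "ref-3"});

        // The two-argument toMap would throw IllegalStateException on "path-a";
        // the merge function keeps the newer value instead, as the diff does.
        Map<String, String> byKey = entries.stream()
                .collect(Collectors.toMap(
                        e -> e[0],
                        e -> e[1],
                        (oldValue, newValue) -> newValue));

        System.out.println(byKey); // "path-a" maps to "ref-2" after the merge
    }
}
```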


---


[jira] [Commented] (NIFI-4818) Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340426#comment-16340426
 ] 

ASF GitHub Bot commented on NIFI-4818:
--

Github user ijokarumawak commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2435#discussion_r164017503
  
--- Diff: 
nifi-nar-bundles/nifi-atlas-bundle/nifi-atlas-reporting-task/src/main/java/org/apache/nifi/atlas/NiFiAtlasHook.java
 ---
@@ -255,7 +255,11 @@ public void commitMessages() {
 }
 return new Tuple<>(refQualifiedName, 
typedQualifiedNameToRef.get(toTypedQualifiedName(typeName, refQualifiedName)));
 }).filter(Objects::nonNull).filter(tuple -> tuple.getValue() != 
null)
-.collect(Collectors.toMap(Tuple::getKey, Tuple::getValue));
+// If duplication happens, use new value.
+.collect(Collectors.toMap(Tuple::getKey, Tuple::getValue, 
(oldValue, newValue) -> {
+logger.warn("Duplicated qualified name was found, use 
the new one. oldValue={}, newValue={}", new Object[]{oldValue, newValue});
+return newValue;
+}));
--- End diff --

While I was testing, I got the following exception:
```
2018-01-25 05:06:41,430 ERROR [Timer-Driven Process Thread-1] 
o.a.n.a.reporting.ReportLineageToAtlas 
ReportLineageToAtlas[id=057986ae-0161-1000-d0b0-1b890a17f5aa] Error running 
task ReportLineageToAtlas[id=057986ae-0161-1000-d0b0-1b890a17f5aa] due to 
java.lang.IllegalStateException: Duplicate key {Id='(type: fs_path, id: 
69be7a40-4ff8-4c4e-b714-2d394c14398d)', traits=[], values={}} NiFiAtlasHook.258
```
The exception means an existing nifi_flow_path entity has more than one entry 
pointing to the same entity with an identical qualified name in its inputs or 
outputs attribute. This happened because I was using an old test environment whose 
data was created before the Atlas integration implemented de-duplication logic. 
However, it would be more protective to handle such duplication in case it occurs 
for some other reason.


> Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas
> 
>
> Key: NIFI-4818
> URL: https://issues.apache.org/jira/browse/NIFI-4818
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
>
> ReportLineageToAtlas parses Hive JDBC connection URLs to get database names. 
> It works if a connection URL does not have parameters. (e.g. 
> jdbc:hive2://host:port/dbName) But it reports wrong database name if there 
> are parameters. E.g. with 
> jdbc:hive2://host:port/dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2,
>  the reported database name will be 
> dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2, 
> including the connection parameters.
> Also, if there are more than one host:port defined, it will not be able to 
> analyze cluster name from hostnames correctly.
> Similarly for Kafka topic, the reporting task uses transit URIs to analyze 
> hostnames and topic names. It does handle multiple host:port definitions 
> within a URI, however, current logic only uses the first hostname entry even 
> if there are multiple ones. For example, with a transit URI, 
> "PLAINTEXT://0.example.com:6667,1.example.com:6667/topicA", it uses 
> "0.example.com" to match configured regular expressions to derive a cluster 
> name. If none of regex matches, then it uses the default cluster name without 
> looping through all hostnames. It never uses the 2nd or later hostnames to 
> derive a cluster name.
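
To make the Hive2JDBC issue above concrete: the database name has to be taken from 
the URL path with any ";key=value" connection parameters stripped, and every 
host:port entry has to be available when deriving a cluster name. A rough 
standalone sketch of that parsing, not the actual ReportLineageToAtlas code:

```java
import java.util.Arrays;
import java.util.List;

public class HiveJdbcUrlExample {
    // Extracts the database name from a Hive JDBC URL, dropping any
    // ";key=value" connection parameters that follow it.
    static String databaseName(String url) {
        String afterScheme = url.substring("jdbc:hive2://".length());
        String path = afterScheme.substring(afterScheme.indexOf('/') + 1);
        int paramStart = path.indexOf(';');
        return paramStart < 0 ? path : path.substring(0, paramStart);
    }

    // Returns every host:port entry so all hostnames can be inspected,
    // not just the first one.
    static List<String> hostPorts(String url) {
        String afterScheme = url.substring("jdbc:hive2://".length());
        String hosts = afterScheme.substring(0, afterScheme.indexOf('/'));
        return Arrays.asList(hosts.split(","));
    }

    public static void main(String[] args) {
        String url = "jdbc:hive2://host1:10000,host2:10000/dbName"
                + ";serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2";
        System.out.println(databaseName(url)); // dbName
        System.out.println(hostPorts(url));    // [host1:10000, host2:10000]
    }
}
```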



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4818) Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340419#comment-16340419
 ] 

ASF GitHub Bot commented on NIFI-4818:
--

GitHub user ijokarumawak opened a pull request:

https://github.com/apache/nifi/pull/2435

NIFI-4818: Fix transit URL parsing at Hive2JDBC and KafkaTopic for Re…

…portLineageToAtlas

- Hive2JDBC: Handle connection parameters and multiple host entries
correctly
- KafkaTopic: Handle multiple host entries correctly
- Avoid potential "IllegalStateException: Duplicate key" exception
when NiFiAtlasHook analyzes existing NiFiFlowPath input/output entries

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijokarumawak/nifi nifi-4818

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2435.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2435


commit 91ee885398128e796918a6fc6b98bdf442c5ebf1
Author: Koji Kawamura 
Date:   2018-01-25T04:57:01Z

NIFI-4818: Fix transit URL parsing at Hive2JDBC and KafkaTopic for 
ReportLineageToAtlas

- Hive2JDBC: Handle connection parameters and multiple host entries
correctly
- KafkaTopic: Handle multiple host entries correctly
- Avoid potential "IllegalStateException: Duplicate key" exception
when NiFiAtlasHook analyzes existing NiFiFlowPath input/output entries




> Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas
> 
>
> Key: NIFI-4818
> URL: https://issues.apache.org/jira/browse/NIFI-4818
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Koji Kawamura
>Assignee: Koji Kawamura
>Priority: Major
>
> ReportLineageToAtlas parses Hive JDBC connection URLs to get database names. 
> It works if a connection URL does not have parameters. (e.g. 
> jdbc:hive2://host:port/dbName) But it reports wrong database name if there 
> are parameters. E.g. with 
> jdbc:hive2://host:port/dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2,
>  the reported database name will be 
> dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2, 
> including the connection parameters.
> Also, if there are more than one host:port defined, it will not be able to 
> analyze cluster name from hostnames correctly.
> Similarly for Kafka topic, the reporting task uses transit URIs to analyze 
> hostnames and topic names. It does handle multiple host:port definitions 
> within a URI, however, current logic only uses the first hostname entry even 
> if there are multiple ones. For example, with a transit URI, 
> "PLAINTEXT://0.example.com:6667,1.example.com:6667/topicA", it uses 
> "0.example.com" to match configured regular expressions to derive a 

[GitHub] nifi pull request #2435: NIFI-4818: Fix transit URL parsing at Hive2JDBC and...

2018-01-25 Thread ijokarumawak
GitHub user ijokarumawak opened a pull request:

https://github.com/apache/nifi/pull/2435

NIFI-4818: Fix transit URL parsing at Hive2JDBC and KafkaTopic for Re…

…portLineageToAtlas

- Hive2JDBC: Handle connection parameters and multiple host entries
correctly
- KafkaTopic: Handle multiple host entries correctly
- Avoid potential "IllegalStateException: Duplicate key" exception
when NiFiAtlasHook analyzes existing NiFiFlowPath input/output entries

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijokarumawak/nifi nifi-4818

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2435.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2435


commit 91ee885398128e796918a6fc6b98bdf442c5ebf1
Author: Koji Kawamura 
Date:   2018-01-25T04:57:01Z

NIFI-4818: Fix transit URL parsing at Hive2JDBC and KafkaTopic for 
ReportLineageToAtlas

- Hive2JDBC: Handle connection parameters and multiple host entries
correctly
- KafkaTopic: Handle multiple host entries correctly
- Avoid potential "IllegalStateException: Duplicate key" exception
when NiFiAtlasHook analyzes existing NiFiFlowPath input/output entries




---


[jira] [Created] (NIFI-4818) Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas

2018-01-25 Thread Koji Kawamura (JIRA)
Koji Kawamura created NIFI-4818:
---

 Summary: Fix transit URL parsing at Hive2JDBC and KafkaTopic for 
ReportLineageToAtlas
 Key: NIFI-4818
 URL: https://issues.apache.org/jira/browse/NIFI-4818
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.5.0
Reporter: Koji Kawamura
Assignee: Koji Kawamura


ReportLineageToAtlas parses Hive JDBC connection URLs to get database names. It 
works if a connection URL does not have parameters (e.g. 
jdbc:hive2://host:port/dbName), but it reports the wrong database name if there 
are parameters (e.g. 
jdbc:hive2://host:port/dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2).

Also, if more than one host:port is defined, it will not be able to derive the 
cluster name from the hostnames correctly.

Similarly for Kafka topics, the reporting task uses transit URIs to analyze 
hostnames and topic names. It does handle multiple host:port definitions within 
a URI; however, the current logic only uses the first hostname entry even if 
there are multiple ones. For example, with the transit URI 
"PLAINTEXT://0.example.com:6667,1.example.com:6667/topicA", it uses 
"0.example.com" to match the configured regular expressions to derive a cluster 
name. If none of the regexes matches, it uses the default cluster name without 
looping through all hostnames. It never uses the 2nd or later hostnames to 
derive a cluster name.
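
A sketch of the Kafka side of the fix described above: every host in the broker 
list is tried against the configured pattern rather than only the first one, with 
the matching host standing in for the derived cluster name. The URI and regex are 
illustrative; this is not the reporting task's actual implementation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class KafkaTransitUriExample {
    // Tries every hostname from "PLAINTEXT://host1:port,host2:port/topic"
    // against the cluster-name pattern, instead of only the first entry.
    static String clusterNameFor(String transitUri, Pattern clusterPattern, String defaultName) {
        String hostPart = transitUri.substring(transitUri.indexOf("://") + 3,
                transitUri.lastIndexOf('/'));
        List<String> hostPorts = Arrays.asList(hostPart.split(","));
        return hostPorts.stream()
                .map(hostPort -> hostPort.split(":")[0])
                .filter(host -> clusterPattern.matcher(host).matches())
                .findFirst()
                .orElse(defaultName);
    }

    public static void main(String[] args) {
        Pattern pattern = Pattern.compile("1\\.example\\.com"); // matches only the 2nd host
        String uri = "PLAINTEXT://0.example.com:6667,1.example.com:6667/topicA";
        System.out.println(clusterNameFor(uri, pattern, "default")); // 1.example.com
    }
}
```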



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFI-4817) GetFile is missing a context.yield when using "Polling Interval", causing it to spin

2018-01-25 Thread Joseph Percivall (JIRA)
Joseph Percivall created NIFI-4817:
--

 Summary: GetFile is missing a context.yield when using "Polling 
Interval", causing it to spin
 Key: NIFI-4817
 URL: https://issues.apache.org/jira/browse/NIFI-4817
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Joseph Percivall


The "Polling Interval" property is used so that the listing of the directory 
isn't done on every onTrigger of the processor. The actual check is here[1]. In 
the event, the fileQueue is empty and it's not yet time to list the directory 
again, the processor returns here[2]. Since there isn't a context.yield before 
the return, the processor (assuming the default scheduling period of '0 sec') 
will immediately attempt to do the same thing. Causing the processor to spin 
and take up as many concurrent tasks as it's allowed.

A "context.yield" should be added before the return so that the processor 
doesn't just spin between listings.

 

[1] 
https://github.com/apache/nifi/blob/7f5eabd603bfc326dadc35590bbe69304e8c90fa/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetFile.java#L372

[2] 
https://github.com/apache/nifi/blob/7f5eabd603bfc326dadc35590bbe69304e8c90fa/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GetFile.java#L406
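
The proposed fix amounts to calling context.yield() before that early return so 
the framework backs off instead of rescheduling the processor immediately. A 
simplified sketch of the onTrigger shape (it assumes the nifi-api on the 
classpath; the queue and timing fields are placeholders, not GetFile's actual 
members):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

public class YieldingGetFileSketch extends AbstractProcessor {
    // Placeholder state standing in for GetFile's internal file queue and
    // "next time we're allowed to list the directory" bookkeeping.
    private final BlockingQueue<String> fileQueue = new LinkedBlockingQueue<>();
    private volatile long nextListingTime = Long.MAX_VALUE;

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
        final long now = System.currentTimeMillis();
        if (fileQueue.isEmpty() && now < nextListingTime) {
            // Without this yield, a '0 sec' run schedule makes the framework
            // call onTrigger again immediately, spinning on all concurrent tasks.
            context.yield();
            return;
        }
        // ... list the directory, refill fileQueue, emit flow files ...
    }
}
```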



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2365: NIFI-4092: Removing direct dependency on jaxb

2018-01-25 Thread joewitt
Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2365
  
@ravipapisetti it has been resolved and is in the nifi 1.5.0 release.  
You'll need to upgrade to that version. 


---


[jira] [Commented] (NIFI-4092) ClassCastException Warning during cluster sync

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340202#comment-16340202
 ] 

ASF GitHub Bot commented on NIFI-4092:
--

Github user joewitt commented on the issue:

https://github.com/apache/nifi/pull/2365
  
@ravipapisetti it has been resolved and is in the nifi 1.5.0 release.  
You'll need to upgrade to that version. 


> ClassCastException Warning during cluster sync
> --
>
> Key: NIFI-4092
> URL: https://issues.apache.org/jira/browse/NIFI-4092
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Joseph Gresock
>Assignee: Matt Gilman
>Priority: Major
>  Labels: release-notes
> Fix For: 1.5.0
>
>
> This is the stack trace I receive, though I'm not sure it affects anything, 
> since the cluster is eventually able to connect.
> 2017-06-20 13:46:44,680 WARN [Reconnect ip-172-31-55-36.ec2.internal:8443] 
> o.a.n.c.c.node.NodeClusterCoordinator Problem encountered issuing 
> reconnection request to node ip-172-31-55-36.ec2.internal:8443
> java.io.IOException: 
> org.apache.nifi.controller.serialization.FlowSerializationException: 
> java.lang.ClassCastException: 
> org.apache.nifi.web.api.dto.TemplateDTO$JaxbAccessorM_getDescription_setDescription_java_lang_String
>  cannot be cast to com.sun.xml.internal.bind.v2.runtime.reflect.Accessor
> at 
> org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.save(StandardXMLFlowConfigurationDAO.java:143)
> at 
> org.apache.nifi.controller.StandardFlowService.createDataFlowFromController(StandardFlowService.java:607)
> at 
> org.apache.nifi.controller.StandardFlowService.createDataFlowFromController(StandardFlowService.java:100)
> at 
> org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator$2.run(NodeClusterCoordinator.java:706)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.nifi.controller.serialization.FlowSerializationException: 
> java.lang.ClassCastException: 
> org.apache.nifi.web.api.dto.TemplateDTO$JaxbAccessorM_getDescription_setDescription_java_lang_String
>  cannot be cast to com.sun.xml.internal.bind.v2.runtime.reflect.Accessor
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addTemplate(StandardFlowSerializer.java:546)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addProcessGroup(StandardFlowSerializer.java:203)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addProcessGroup(StandardFlowSerializer.java:187)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addProcessGroup(StandardFlowSerializer.java:187)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.serialize(StandardFlowSerializer.java:97)
> at 
> org.apache.nifi.controller.FlowController.serialize(FlowController.java:1544)
> at 
> org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.save(StandardXMLFlowConfigurationDAO.java:141)
> ... 4 common frames omitted
> Caused by: java.lang.ClassCastException: 
> org.apache.nifi.web.api.dto.TemplateDTO$JaxbAccessorM_getDescription_setDescription_java_lang_String
>  cannot be cast to com.sun.xml.internal.bind.v2.runtime.reflect.Accessor
> at 
> com.sun.xml.internal.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.instanciate(OptimizedAccessorFactory.java:190)
> at 
> com.sun.xml.internal.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.get(OptimizedAccessorFactory.java:129)
> at 
> com.sun.xml.internal.bind.v2.runtime.reflect.Accessor$GetterSetterReflection.optimize(Accessor.java:388)
> at 
> com.sun.xml.internal.bind.v2.runtime.property.SingleElementLeafProperty.(SingleElementLeafProperty.java:77)
> at sun.reflect.GeneratedConstructorAccessor435.newInstance(Unknown 
> Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> com.sun.xml.internal.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:113)
> at 
> com.sun.xml.internal.bind.v2.runtime.ClassBeanInfoImpl.(ClassBeanInfoImpl.java:166)
> at 
> com.sun.xml.internal.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:488)
> at 
> com.sun.xml.internal.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:305)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4092) ClassCastException Warning during cluster sync

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340196#comment-16340196
 ] 

ASF GitHub Bot commented on NIFI-4092:
--

Github user ravipapisetti commented on the issue:

https://github.com/apache/nifi/pull/2365
  
Hi, we are hitting the same issue with NiFi 1.3. Is there any workaround for how 
this could be solved? Interestingly, this happens on only one of the nodes in 
the cluster, even though all nodes are identical. Appreciate any help.



> ClassCastException Warning during cluster sync
> --
>
> Key: NIFI-4092
> URL: https://issues.apache.org/jira/browse/NIFI-4092
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Joseph Gresock
>Assignee: Matt Gilman
>Priority: Major
>  Labels: release-notes
> Fix For: 1.5.0
>
>
> This is the stack trace I receive, though I'm not sure it affects anything, 
> since the cluster is eventually able to connect.
> 2017-06-20 13:46:44,680 WARN [Reconnect ip-172-31-55-36.ec2.internal:8443] 
> o.a.n.c.c.node.NodeClusterCoordinator Problem encountered issuing 
> reconnection request to node ip-172-31-55-36.ec2.internal:8443
> java.io.IOException: 
> org.apache.nifi.controller.serialization.FlowSerializationException: 
> java.lang.ClassCastException: 
> org.apache.nifi.web.api.dto.TemplateDTO$JaxbAccessorM_getDescription_setDescription_java_lang_String
>  cannot be cast to com.sun.xml.internal.bind.v2.runtime.reflect.Accessor
> at 
> org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.save(StandardXMLFlowConfigurationDAO.java:143)
> at 
> org.apache.nifi.controller.StandardFlowService.createDataFlowFromController(StandardFlowService.java:607)
> at 
> org.apache.nifi.controller.StandardFlowService.createDataFlowFromController(StandardFlowService.java:100)
> at 
> org.apache.nifi.cluster.coordination.node.NodeClusterCoordinator$2.run(NodeClusterCoordinator.java:706)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.nifi.controller.serialization.FlowSerializationException: 
> java.lang.ClassCastException: 
> org.apache.nifi.web.api.dto.TemplateDTO$JaxbAccessorM_getDescription_setDescription_java_lang_String
>  cannot be cast to com.sun.xml.internal.bind.v2.runtime.reflect.Accessor
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addTemplate(StandardFlowSerializer.java:546)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addProcessGroup(StandardFlowSerializer.java:203)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addProcessGroup(StandardFlowSerializer.java:187)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.addProcessGroup(StandardFlowSerializer.java:187)
> at 
> org.apache.nifi.controller.serialization.StandardFlowSerializer.serialize(StandardFlowSerializer.java:97)
> at 
> org.apache.nifi.controller.FlowController.serialize(FlowController.java:1544)
> at 
> org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.save(StandardXMLFlowConfigurationDAO.java:141)
> ... 4 common frames omitted
> Caused by: java.lang.ClassCastException: 
> org.apache.nifi.web.api.dto.TemplateDTO$JaxbAccessorM_getDescription_setDescription_java_lang_String
>  cannot be cast to com.sun.xml.internal.bind.v2.runtime.reflect.Accessor
> at 
> com.sun.xml.internal.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.instanciate(OptimizedAccessorFactory.java:190)
> at 
> com.sun.xml.internal.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.get(OptimizedAccessorFactory.java:129)
> at 
> com.sun.xml.internal.bind.v2.runtime.reflect.Accessor$GetterSetterReflection.optimize(Accessor.java:388)
> at 
> com.sun.xml.internal.bind.v2.runtime.property.SingleElementLeafProperty.(SingleElementLeafProperty.java:77)
> at sun.reflect.GeneratedConstructorAccessor435.newInstance(Unknown 
> Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> com.sun.xml.internal.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:113)
> at 
> com.sun.xml.internal.bind.v2.runtime.ClassBeanInfoImpl.(ClassBeanInfoImpl.java:166)
> at 
> com.sun.xml.internal.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:488)
> at 
> com.sun.xml.internal.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:305)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2365: NIFI-4092: Removing direct dependency on jaxb

2018-01-25 Thread ravipapisetti
Github user ravipapisetti commented on the issue:

https://github.com/apache/nifi/pull/2365
  
Hi, we are hitting the same issue with NiFi 1.3. Is there any workaround for how 
this could be solved? Interestingly, this happens on only one of the nodes in 
the cluster, even though all nodes are identical. Appreciate any help.



---


[jira] [Commented] (NIFI-4786) Allow Expression Evaluation to Kinesis/Firehose Stream Name

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340149#comment-16340149
 ] 

ASF GitHub Bot commented on NIFI-4786:
--

Github user SunSatION commented on the issue:

https://github.com/apache/nifi/pull/2409
  
Hi @jvwing 

The problem is caused by the version of Jackson databind defined in the parent 
NiFi POM (2.9.3) conflicting with the version used by the AWS Java SDK (2.6.6).

I've included the old version with scope set to `test` in the 
`nifi-aws-bundle` POM file; however, I'm not sure whether this is the right 
approach. Any suggestions?

The tests were executed successfully. 

[Test Results - 
ITPutKinesisStream.log](https://github.com/apache/nifi/files/1665817/Test.Results.-.ITPutKinesisStream.log)





> Allow Expression Evaluation to Kinesis/Firehose Stream Name
> ---
>
> Key: NIFI-4786
> URL: https://issues.apache.org/jira/browse/NIFI-4786
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 1.5.0
>Reporter: Dorian Bugeja
>Priority: Minor
>  Labels: features, performance, pull-request-available
> Attachments: 
> NIFI_4786___Allow_Expression_Evaluation_to_Kinesis_Firehose_Stream_Name_NIFI_4786___Allow_1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently, the Stream Name property for both Firehose and Kinesis does not 
> support the expression language. With expression support, routing can be 
> performed based on an attribute of the flow file, using a single component 
> rather than multiple components, one for each stream.
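
The requested improvement is typically done by marking the property as supporting 
the NiFi Expression Language and evaluating it per flow file. A hedged sketch of 
that pattern, using the pre-1.7 expressionLanguageSupported(boolean) builder call; 
the property name and class are illustrative, not the actual nifi-aws-bundle code:

```java
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.util.StandardValidators;

public class StreamNamePropertyExample {
    // Illustrative descriptor: expression language enabled on the stream name.
    static final PropertyDescriptor STREAM_NAME = new PropertyDescriptor.Builder()
            .name("Amazon Kinesis Stream Name")
            .description("The name of the Kinesis stream to write to")
            .required(true)
            .expressionLanguageSupported(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();

    // Evaluated per flow file, so an attribute such as ${kinesis.stream}
    // can route records to different streams from a single processor.
    static String resolveStreamName(ProcessContext context, FlowFile flowFile) {
        return context.getProperty(STREAM_NAME)
                .evaluateAttributeExpressions(flowFile)
                .getValue();
    }
}
```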



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2409: NIFI-4786 - Allow Expression Evaluation to Kinesis/Firehos...

2018-01-25 Thread SunSatION
Github user SunSatION commented on the issue:

https://github.com/apache/nifi/pull/2409
  
Hi @jvwing 

The problem is caused by the version of Jackson databind defined in the parent 
NiFi POM (2.9.3) conflicting with the version used by the AWS Java SDK (2.6.6).

I've included the old version with scope set to `test` in the 
`nifi-aws-bundle` POM file; however, I'm not sure whether this is the right 
approach. Any suggestions?

The tests were executed successfully. 

[Test Results - 
ITPutKinesisStream.log](https://github.com/apache/nifi/files/1665817/Test.Results.-.ITPutKinesisStream.log)





---


[jira] [Commented] (NIFI-4814) Add distinctive attribute to S2S reporting tasks

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340112#comment-16340112
 ] 

ASF GitHub Bot commented on NIFI-4814:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
I've written [NIFI-4816](https://issues.apache.org/jira/browse/NIFI-4816) 
to cover the name issue.


> Add distinctive attribute to S2S reporting tasks
> 
>
> Key: NIFI-4814
> URL: https://issues.apache.org/jira/browse/NIFI-4814
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> I'm currently using multiple S2S reporting tasks to send various monitoring 
> data about my workflows. However, this forces me to use multiple input ports 
> (one per type of reporting task) as I'm not able to easily distinguish what 
> data comes from which reporting task.
> I'd like to add an attribute "reporting.task.name" set to the name of the 
> reporting task when sending flow files via S2S. This way I can use a single 
> input port and then use a RouteOnAttribute processor to split my data based 
> on the reporting task source. The objective of using a single input port is 
> to reduce the number of threads used for S2S operations.
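
For context, site-to-site transfers carry flow file attributes alongside the 
payload, so a reporting task can tag each flow file with its own name and a 
downstream RouteOnAttribute can split the data behind a single input port. A 
minimal sketch with the NiFi site-to-site client; the URL, port name, and payload 
are illustrative, not the actual S2S reporting task code:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.apache.nifi.remote.Transaction;
import org.apache.nifi.remote.TransferDirection;
import org.apache.nifi.remote.client.SiteToSiteClient;

public class ReportingTaskAttributeSketch {
    // Sends one payload over S2S with a "reporting.task.name" attribute so a
    // downstream RouteOnAttribute can split data arriving at a shared port.
    static void sendWithTaskName(SiteToSiteClient client, String taskName, byte[] payload)
            throws IOException {
        Map<String, String> attributes = new HashMap<>();
        attributes.put("reporting.task.name", taskName);

        Transaction transaction = client.createTransaction(TransferDirection.SEND);
        transaction.send(payload, attributes);
        transaction.confirm();
        transaction.complete();
    }

    public static void main(String[] args) throws IOException {
        SiteToSiteClient client = new SiteToSiteClient.Builder()
                .url("http://localhost:8080/nifi")  // illustrative URL
                .portName("monitoring")             // illustrative port name
                .build();
        sendWithTaskName(client, "SiteToSiteProvenanceReportingTask",
                "{\"event\":\"example\"}".getBytes(StandardCharsets.UTF_8));
        client.close();
    }
}
```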



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2431: NIFI-4814 - Add distinctive attribute to S2S reporting tas...

2018-01-25 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
I've written [NIFI-4816](https://issues.apache.org/jira/browse/NIFI-4816) 
to cover the name issue.


---


[jira] [Created] (NIFI-4816) Changes to ReportingTask name are not available to the ReportingTask

2018-01-25 Thread Matt Burgess (JIRA)
Matt Burgess created NIFI-4816:
--

 Summary: Changes to ReportingTask name are not available to the 
ReportingTask
 Key: NIFI-4816
 URL: https://issues.apache.org/jira/browse/NIFI-4816
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework, Extensions
Reporter: Matt Burgess


The Reporting Task name is only set on the ReportingTask itself during 
initialize(), which is only called the first time the ReportingTask is 
instantiated. This means that if you change the name of the ReportingTask and 
restart it, the ReportingTask still has its original name, and the current name 
is inaccessible via the ConfigurationContext it is passed later. If you restart 
NiFi, the new name is set and stays that way.

Rather than calling initialize() more than once, it is proposed to make the 
current name (and any other appropriate properties) available, perhaps via the 
ConfigurationContext that is passed to methods annotated with OnScheduled.
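
The stale-name behavior comes from caching the name once at initialization time. A 
hedged sketch of that pattern, assuming the name is exposed through 
ReportingInitializationContext.getName() as the description implies; the class 
itself is illustrative:

```java
import org.apache.nifi.reporting.AbstractReportingTask;
import org.apache.nifi.reporting.InitializationException;
import org.apache.nifi.reporting.ReportingContext;
import org.apache.nifi.reporting.ReportingInitializationContext;

public class NameCachingReportingTask extends AbstractReportingTask {
    // Captured once, the first time the task is instantiated. Renaming the
    // task in the UI and restarting it does not run initialize() again, so
    // this field keeps the original name until NiFi itself is restarted.
    private volatile String cachedName;

    @Override
    protected void init(final ReportingInitializationContext config) throws InitializationException {
        cachedName = config.getName();
    }

    @Override
    public void onTrigger(final ReportingContext context) {
        getLogger().info("Reporting as '{}'", new Object[]{cachedName});
    }
}
```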



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-385) RPG destruction can lead to EOFException in NiFi when sockets are not closed.

2018-01-25 Thread marco polo (JIRA)
marco polo created MINIFICPP-385:


 Summary: RPG destruction can lead to EOFException in NiFi when 
sockets are not closed. 
 Key: MINIFICPP-385
 URL: https://issues.apache.org/jira/browse/MINIFICPP-385
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: marco polo
Assignee: marco polo


The current solution has not caused the issue in hours of testing:

Set up a countdown latch, using RAII to control closure while there are any open 
sockets in the onTrigger function in the RPG.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4814) Add distinctive attribute to S2S reporting tasks

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339851#comment-16339851
 ] 

ASF GitHub Bot commented on NIFI-4814:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Good catch, I didn't try that. Probably best to fix that issue first; 
otherwise it could be confusing if users change the reporting task name after 
creating it. Do you want to raise a JIRA or should I do it?


> Add distinctive attribute to S2S reporting tasks
> 
>
> Key: NIFI-4814
> URL: https://issues.apache.org/jira/browse/NIFI-4814
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> I'm currently using multiple S2S reporting tasks to send various monitoring 
> data about my workflows. However, this forces me to use multiple input ports 
> (one per type of reporting task) as I'm not able to easily distinguish what 
> data comes from which reporting task.
> I'd like to add an attribute "reporting.task.name" set to the name of the 
> reporting task when sending flow files via S2S. This way I can use a single 
> input port and then use a RouteOnAttribute processor to split my data based 
> on the reporting task source. The objective of using a single input port is 
> to reduce the number of threads used for S2S operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2431: NIFI-4814 - Add distinctive attribute to S2S reporting tas...

2018-01-25 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Good catch, I didn't try that. Probably best to fix that issue first; 
otherwise it could be confusing if users change the reporting task name after 
creating it. Do you want to raise a JIRA or should I do it?


---


[jira] [Commented] (NIFI-4814) Add distinctive attribute to S2S reporting tasks

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339849#comment-16339849
 ] 

ASF GitHub Bot commented on NIFI-4814:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Ugh, bad news. The Reporting Task name, although available in the UI, is 
only set on the ReportingTask itself during initialize(), which is only called 
the first time the ReportingTask is instantiated. This means if you change the 
name of the ReportingTask and restart it, it is the original name that is 
reported in your name attribute above. However if you restart NiFi it will pick 
up the new name.  That's not your fault, it's a bug; but I wonder if we should 
bring this in before or after any fix for that part?


> Add distinctive attribute to S2S reporting tasks
> 
>
> Key: NIFI-4814
> URL: https://issues.apache.org/jira/browse/NIFI-4814
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> I'm currently using multiple S2S reporting tasks to send various monitoring 
> data about my workflows. However, this forces me to use multiple input ports 
> (one per type of reporting task) as I'm not able to easily distinguish what 
> data comes from which reporting task.
> I'd like to add an attribute "reporting.task.name" set to the name of the 
> reporting task when sending flow files via S2S. This way I can use a single 
> input port and then use a RouteOnAttribute processor to split my data based 
> on the reporting task source. The objective of using a single input port is 
> to reduce the number of threads used for S2S operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2431: NIFI-4814 - Add distinctive attribute to S2S reporting tas...

2018-01-25 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Ugh, bad news. The Reporting Task name, although available in the UI, is 
only set on the ReportingTask itself during initialize(), which is only called 
the first time the ReportingTask is instantiated. This means if you change the 
name of the ReportingTask and restart it, it is the original name that is 
reported in your name attribute above. However if you restart NiFi it will pick 
up the new name.  That's not your fault, it's a bug; but I wonder if we should 
bring this in before or after any fix for that part?


---


[GitHub] nifi-minifi-cpp pull request #251: MINIFICPP-384 Ignore flex/bison generated...

2018-01-25 Thread achristianson
GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/251

MINIFICPP-384 Ignore flex/bison generated files in CPack source package

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-384

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/251.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #251


commit 696ea2b2133c3204c9872d7c88a37a6f88a3c2ed
Author: Andrew I. Christianson 
Date:   2018-01-25T20:30:47Z

MINIFICPP-384 Ignore flex/bison generated files in CPack source package




---


[jira] [Commented] (MINIFICPP-384) CPack needs to ignore flex/bison generated files

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MINIFICPP-384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339836#comment-16339836
 ] 

ASF GitHub Bot commented on MINIFICPP-384:
--

GitHub user achristianson opened a pull request:

https://github.com/apache/nifi-minifi-cpp/pull/251

MINIFICPP-384 Ignore flex/bison generated files in CPack source package

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced
 in the commit message?

- [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-384

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi-minifi-cpp/pull/251.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #251


commit 696ea2b2133c3204c9872d7c88a37a6f88a3c2ed
Author: Andrew I. Christianson 
Date:   2018-01-25T20:30:47Z

MINIFICPP-384 Ignore flex/bison generated files in CPack source package




> CPack needs to ignore flex/bison generated files
> 
>
> Key: MINIFICPP-384
> URL: https://issues.apache.org/jira/browse/MINIFICPP-384
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
> Fix For: 0.4.0
>
>
> Lexer/parser generated code is included in src packages. These files need to 
> be excluded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-384) CPack needs to ignore flex/bison generated files

2018-01-25 Thread Andrew Christianson (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Christianson updated MINIFICPP-384:
--
Fix Version/s: 0.4.0

> CPack needs to ignore flex/bison generated files
> 
>
> Key: MINIFICPP-384
> URL: https://issues.apache.org/jira/browse/MINIFICPP-384
> Project: NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andrew Christianson
>Assignee: Andrew Christianson
>Priority: Major
> Fix For: 0.4.0
>
>
> Lexer/parser generated code is included in src packages. These files need to 
> be excluded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4700) PostHTTP: close client

2018-01-25 Thread Michael Hogue (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Hogue updated NIFI-4700:

Description: 
In PostHTTP, the CloseableHttpClient never actually appears to be closed...

Additionally, we could leverage CloseableHttpResponse to close responses.

  was:In PostHTTP, the CloseableHttpClient never actaully appears to be 
closed...


> PostHTTP: close client
> --
>
> Key: NIFI-4700
> URL: https://issues.apache.org/jira/browse/NIFI-4700
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Major
> Fix For: 1.6.0
>
>
> In PostHTTP, the CloseableHttpClient never actually appears to be closed...
> Additionally, we could leverage CloseableHttpResponse to close responses.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-384) CPack needs to ignore flex/bison generated files

2018-01-25 Thread Andrew Christianson (JIRA)
Andrew Christianson created MINIFICPP-384:
-

 Summary: CPack needs to ignore flex/bison generated files
 Key: MINIFICPP-384
 URL: https://issues.apache.org/jira/browse/MINIFICPP-384
 Project: NiFi MiNiFi C++
  Issue Type: Bug
Reporter: Andrew Christianson
Assignee: Andrew Christianson


Lexer/parser generated code is included in src packages. These files need to be 
excluded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4700) PostHTTP: close client

2018-01-25 Thread Michael Hogue (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Hogue updated NIFI-4700:

Fix Version/s: 1.6.0
   Status: Patch Available  (was: In Progress)

> PostHTTP: close client
> --
>
> Key: NIFI-4700
> URL: https://issues.apache.org/jira/browse/NIFI-4700
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Major
> Fix For: 1.6.0
>
>
> In PostHTTP, the CloseableHttpClient never actually appears to be closed...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4700) PostHTTP: close client

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339728#comment-16339728
 ] 

ASF GitHub Bot commented on NIFI-4700:
--

GitHub user m-hogue opened a pull request:

https://github.com/apache/nifi/pull/2434

NIFI-4700: Moved all PostHTTP http clients, http responses to 
try-with-resources

… to make sure they're closed

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/m-hogue/nifi NIFI-4700

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2434.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2434


commit 44799ba670d06bbe1403373ce3fee6b8646a1a83
Author: m-hogue 
Date:   2018-01-25T19:51:00Z

NIFI-4700: Moved all http clients, http responses to try-with-resources to 
make sure they're closed
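
The change described here is the standard try-with-resources pattern for Apache 
HttpClient, where both CloseableHttpClient and CloseableHttpResponse implement 
Closeable. A self-contained sketch of that pattern; the URL and payload are 
illustrative, and this is not the PostHTTP code itself:

```java
import java.io.IOException;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class TryWithResourcesPostExample {
    public static void main(String[] args) throws IOException {
        HttpPost post = new HttpPost("http://localhost:8080/contentListener"); // illustrative URL
        post.setEntity(new StringEntity("example payload"));

        // Both the client and the response are closed automatically, even if
        // execute() or the response handling throws.
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse response = client.execute(post)) {
            int statusCode = response.getStatusLine().getStatusCode();
            String body = EntityUtils.toString(response.getEntity());
            System.out.println(statusCode + ": " + body);
        }
    }
}
```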




> PostHTTP: close client
> --
>
> Key: NIFI-4700
> URL: https://issues.apache.org/jira/browse/NIFI-4700
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Major
>
> In PostHTTP, the CloseableHttpClient never actually appears to be closed...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2434: NIFI-4700: Moved all PostHTTP http clients, http re...

2018-01-25 Thread m-hogue
GitHub user m-hogue opened a pull request:

https://github.com/apache/nifi/pull/2434

NIFI-4700: Moved all PostHTTP http clients, http responses to 
try-with-resources

… to make sure they're closed

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/m-hogue/nifi NIFI-4700

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2434.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2434


commit 44799ba670d06bbe1403373ce3fee6b8646a1a83
Author: m-hogue 
Date:   2018-01-25T19:51:00Z

NIFI-4700: Moved all http clients, http responses to try-with-resources to 
make sure they're closed




---


[jira] [Assigned] (NIFI-4700) PostHTTP: close client

2018-01-25 Thread Michael Hogue (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Hogue reassigned NIFI-4700:
---

Assignee: Michael Hogue

> PostHTTP: close client
> --
>
> Key: NIFI-4700
> URL: https://issues.apache.org/jira/browse/NIFI-4700
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Brandon DeVries
>Assignee: Michael Hogue
>Priority: Major
>
> In PostHTTP, the CloseableHttpClient never actually appears to be closed...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4289) Implement put processor for InfluxDB

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339707#comment-16339707
 ] 

ASF GitHub Bot commented on NIFI-4289:
--

Github user mans2singh commented on the issue:

https://github.com/apache/nifi/pull/2101
  
@MikeThomsen - Thanks for your feedback. Please let me know if I can 
assist in any way. Mans


> Implement put processor for InfluxDB
> 
>
> Key: NIFI-4289
> URL: https://issues.apache.org/jira/browse/NIFI-4289
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.3.0
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: insert, measurements,, put, timeseries
>
> Support inserting time series measurements into InfluxDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2101: NIFI-4289 - InfluxDB put processor

2018-01-25 Thread mans2singh
Github user mans2singh commented on the issue:

https://github.com/apache/nifi/pull/2101
  
@MikeThomsen - Thanks for your feedback. Please let me know if I can 
assist in any way. Mans


---


[jira] [Commented] (NIFI-4814) Add distinctive attribute to S2S reporting tasks

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339657#comment-16339657
 ] 

ASF GitHub Bot commented on NIFI-4814:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Yep, there could be some situations where it would be useful. Added! Thanks!


> Add distinctive attribute to S2S reporting tasks
> 
>
> Key: NIFI-4814
> URL: https://issues.apache.org/jira/browse/NIFI-4814
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> I'm currently using multiple S2S reporting tasks to send various monitoring 
> data about my workflows. However, this forces me to use multiple input ports 
> (one per type of reporting task) as I'm not able to easily distinguish which 
> data comes from which reporting task.
> I'd like to add an attribute "reporting.task.name" set to the name of the 
> reporting task when sending flow files via S2S. This way I can use a single 
> input port and then use a RouteOnAttribute processor to split my data based 
> on the reporting task source. The objective of using a single input port is to 
> reduce the number of threads used for S2S operations.
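
For context, a rough sketch (not the PR's code) of how such an attribute could be attached to the flow files a reporting task sends over S2S, and how it can then be routed downstream; the uuid attribute name below is an assumption based on the later discussion in this thread:

```java
import java.util.HashMap;
import java.util.Map;

class ReportingTaskAttributeSketch {
    // taskName/taskUuid are placeholders for values the reporting task knows about itself.
    static Map<String, String> s2sAttributes(String taskName, String taskUuid) {
        final Map<String, String> attributes = new HashMap<>();
        attributes.put("reporting.task.name", taskName);
        attributes.put("reporting.task.uuid", taskUuid); // assumed attribute name; see the UUID discussion below
        return attributes;
    }
    // Downstream, a single input port plus a RouteOnAttribute property such as
    //   ${reporting.task.name:equals('S2S Bulletins')}
    // can split the stream per reporting task.
}
```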



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2431: NIFI-4814 - Add distinctive attribute to S2S reporting tas...

2018-01-25 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Yep, there could be some situations where it would be useful. Added! Thanks!


---


[jira] [Commented] (NIFI-4814) Add distinctive attribute to S2S reporting tasks

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339588#comment-16339588
 ] 

ASF GitHub Bot commented on NIFI-4814:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Also, not required but it might be a good idea to add the type as well. 
Either way, let me know and I'll merge


> Add distinctive attribute to S2S reporting tasks
> 
>
> Key: NIFI-4814
> URL: https://issues.apache.org/jira/browse/NIFI-4814
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> I'm currently using multiple S2S reporting tasks to send various monitoring 
> data about my workflows. However, this forces me to use multiple input ports 
> (one per type of reporting task) as I'm not able to easily distinguish which 
> data comes from which reporting task.
> I'd like to add an attribute "reporting.task.name" set to the name of the 
> reporting task when sending flow files via S2S. This way I can use a single 
> input port and then use a RouteOnAttribute processor to split my data based 
> on the reporting task source. The objective of using a single input port is to 
> reduce the number of threads used for S2S operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2431: NIFI-4814 - Add distinctive attribute to S2S reporting tas...

2018-01-25 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Also, not required but it might be a good idea to add the type as well. 
Either way, let me know and I'll merge


---


[jira] [Updated] (NIFIREG-129) swagger spec missing snapshot_metadata param for bucket_flows_api

2018-01-25 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-129:

Fix Version/s: 0.2.0

> swagger spec missing snapshot_metadata param for bucket_flows_api
> -
>
> Key: NIFIREG-129
> URL: https://issues.apache.org/jira/browse/NIFIREG-129
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 0.2.0
>
>
> When calling methods to retrieve versioned flows from the bucket_flows_api ( 
> get_flow, get_flow_version, get_latest_flow_version, etc) the spec calls for 
> a bucket_id and a flow_id, however the endpoint returns the error "Invalid 
> value for `snapshot_metadata`, must not be `None`"
> Presumably there is a parameter missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFIREG-129) swagger spec missing snapshot_metadata param for bucket_flows_api

2018-01-25 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran updated NIFIREG-129:

Affects Version/s: (was: 0.2.0)

> swagger spec missing snapshot_metadata param for bucket_flows_api
> -
>
> Key: NIFIREG-129
> URL: https://issues.apache.org/jira/browse/NIFIREG-129
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 0.2.0
>
>
> When calling methods to retrieve versioned flows from the bucket_flows_api ( 
> get_flow, get_flow_version, get_latest_flow_version, etc) the spec calls for 
> a bucket_id and a flow_id, however the endpoint returns the error "Invalid 
> value for `snapshot_metadata`, must not be `None`"
> Presumably there is a parameter missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-129) swagger spec missing snapshot_metadata param for bucket_flows_api

2018-01-25 Thread Kevin Doran (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339531#comment-16339531
 ] 

Kevin Doran commented on NIFIREG-129:
-

I found the root cause and added a commit with the fix to this PR: 
https://github.com/apache/nifi-registry/pull/95

> swagger spec missing snapshot_metadata param for bucket_flows_api
> -
>
> Key: NIFIREG-129
> URL: https://issues.apache.org/jira/browse/NIFIREG-129
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0, 0.2.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
>
> When calling methods to retrieve versioned flows from the bucket_flows_api ( 
> get_flow, get_flow_version, get_latest_flow_version, etc) the spec calls for 
> a bucket_id and a flow_id, however the endpoint returns the error "Invalid 
> value for `snapshot_metadata`, must not be `None`"
> Presumably there is a parameter missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4814) Add distinctive attribute to S2S reporting tasks

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339493#comment-16339493
 ] 

ASF GitHub Bot commented on NIFI-4814:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Yes you're right, I'll add the UUID as well!


> Add distinctive attribute to S2S reporting tasks
> 
>
> Key: NIFI-4814
> URL: https://issues.apache.org/jira/browse/NIFI-4814
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> I'm currently using multiple S2S reporting tasks to send various monitoring 
> data about my workflows. However, this forces me to use multiple input ports 
> (one per type of reporting task) as I'm not able to easily distinguish which 
> data comes from which reporting task.
> I'd like to add an attribute "reporting.task.name" set to the name of the 
> reporting task when sending flow files via S2S. This way I can use a single 
> input port and then use a RouteOnAttribute processor to split my data based 
> on the reporting task source. The objective of using a single input port is to 
> reduce the number of threads used for S2S operations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2431: NIFI-4814 - Add distinctive attribute to S2S reporting tas...

2018-01-25 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Yes you're right, I'll add the UUID as well!


---


[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339488#comment-16339488
 ] 

ASF GitHub Bot commented on NIFI-4809:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2430
  
That's a great idea! I'll have a look.


> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2180: Added GetMongoAggregation to support running Mongo ...

2018-01-25 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2180#discussion_r163883982
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/AbstractMongoProcessor.java
 ---
@@ -95,15 +102,54 @@
 
 public static final PropertyDescriptor WRITE_CONCERN = new 
PropertyDescriptor.Builder()
 .name("Write Concern")
+.displayName("Write Concern")
 .description("The write concern to use")
 .required(true)
 .allowableValues(WRITE_CONCERN_ACKNOWLEDGED, 
WRITE_CONCERN_UNACKNOWLEDGED, WRITE_CONCERN_FSYNCED, WRITE_CONCERN_JOURNALED,
 WRITE_CONCERN_REPLICA_ACKNOWLEDGED, 
WRITE_CONCERN_MAJORITY)
 .defaultValue(WRITE_CONCERN_ACKNOWLEDGED)
 .build();
 
+static final PropertyDescriptor RESULTS_PER_FLOWFILE = new 
PropertyDescriptor.Builder()
+.name("results-per-flowfile")
+.displayName("Results Per FlowFile")
+.description("How many results to put into a flowfile at once. 
The whole body will be treated as a JSON array of results.")
+.required(false)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.defaultValue("1")
+.build();
+
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("Batch Size")
+.displayName("Batch Size")
+.description("The number of elements returned from the server 
in one batch")
+.required(false)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.defaultValue("1")
+.build();
+
+static final PropertyDescriptor QUERY_ATTRIBUTE = new 
PropertyDescriptor.Builder()
+.name("mongo-agg-query-attribute")
+.displayName("Query Output Attribute")
+.description("If set, the query will be written to a specified 
attribute on the output flowfiles.")
+.expressionLanguageSupported(true)
+.addValidator(Validator.VALID)
--- End diff --

Since this has to be a valid attribute name, I recommend changing this to 
`StandardValidators.ATTRIBUTE_KEY_PROPERTY_NAME_VALIDATOR`

Also, since you may be writing this attribute in GetMongo and 
RunMongoAggregation, there should be a WritesAttribute annotation on both 
processors (same goes for mime.type or any other attribute that is written)
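
A hypothetical illustration of both suggestions; the attribute names, descriptions, and class name below are examples, not the PR's final values:

```java
import org.apache.nifi.annotation.behavior.WritesAttribute;
import org.apache.nifi.annotation.behavior.WritesAttributes;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

// Declares, for documentation purposes, which attributes the processor writes.
@WritesAttributes({
    @WritesAttribute(attribute = "mime.type", description = "Set to application/json for the JSON results"),
    @WritesAttribute(attribute = "mongo.query", description = "The query that produced this flow file, when configured")
})
class QueryAttributeSketch {
    // Uses the attribute-key validator suggested above instead of Validator.VALID.
    static final PropertyDescriptor QUERY_ATTRIBUTE = new PropertyDescriptor.Builder()
            .name("mongo-agg-query-attribute")
            .displayName("Query Output Attribute")
            .description("If set, the query will be written to this attribute on the output flow files.")
            .required(false)
            .expressionLanguageSupported(true)
            .addValidator(StandardValidators.ATTRIBUTE_KEY_PROPERTY_NAME_VALIDATOR)
            .build();
}
```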


---


[GitHub] nifi pull request #2180: Added GetMongoAggregation to support running Mongo ...

2018-01-25 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2180#discussion_r163887649
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/test/java/org/apache/nifi/processors/mongodb/GetMongoTest.java
 ---
@@ -135,15 +135,15 @@ public void testValidators() {
 // invalid projection
 runner.setVariable("projection", "{a: x,y,z}");
 runner.setProperty(GetMongo.QUERY, "{a: 1}");
-runner.setProperty(GetMongo.PROJECTION, "${projection}");
+runner.setProperty(GetMongo.PROJECTION, "{a: z}");
--- End diff --

I can't immediately tell the reason for this change, since the projection 
is changed from x,y,z to z. Is this on purpose?


---


[GitHub] nifi pull request #2180: Added GetMongoAggregation to support running Mongo ...

2018-01-25 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2180#discussion_r163883832
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/AbstractMongoProcessor.java
 ---
@@ -95,15 +102,54 @@
 
 public static final PropertyDescriptor WRITE_CONCERN = new 
PropertyDescriptor.Builder()
 .name("Write Concern")
+.displayName("Write Concern")
 .description("The write concern to use")
 .required(true)
 .allowableValues(WRITE_CONCERN_ACKNOWLEDGED, 
WRITE_CONCERN_UNACKNOWLEDGED, WRITE_CONCERN_FSYNCED, WRITE_CONCERN_JOURNALED,
 WRITE_CONCERN_REPLICA_ACKNOWLEDGED, 
WRITE_CONCERN_MAJORITY)
 .defaultValue(WRITE_CONCERN_ACKNOWLEDGED)
 .build();
 
+static final PropertyDescriptor RESULTS_PER_FLOWFILE = new 
PropertyDescriptor.Builder()
+.name("results-per-flowfile")
+.displayName("Results Per FlowFile")
+.description("How many results to put into a flowfile at once. 
The whole body will be treated as a JSON array of results.")
+.required(false)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.defaultValue("1")
+.build();
+
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("Batch Size")
+.displayName("Batch Size")
+.description("The number of elements returned from the server 
in one batch")
+.required(false)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
--- End diff --

Should this allow 0 to fetch all batches? Or maybe since it is not 
required, the description can say that if this property is not specified, then 
all elements will be returned.


---


[GitHub] nifi pull request #2180: Added GetMongoAggregation to support running Mongo ...

2018-01-25 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2180#discussion_r163884629
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/AbstractMongoProcessor.java
 ---
@@ -95,15 +100,36 @@
 
 public static final PropertyDescriptor WRITE_CONCERN = new 
PropertyDescriptor.Builder()
 .name("Write Concern")
+.displayName("Write Concern")
 .description("The write concern to use")
 .required(true)
 .allowableValues(WRITE_CONCERN_ACKNOWLEDGED, 
WRITE_CONCERN_UNACKNOWLEDGED, WRITE_CONCERN_FSYNCED, WRITE_CONCERN_JOURNALED,
 WRITE_CONCERN_REPLICA_ACKNOWLEDGED, 
WRITE_CONCERN_MAJORITY)
 .defaultValue(WRITE_CONCERN_ACKNOWLEDGED)
 .build();
 
+static final PropertyDescriptor RESULTS_PER_FLOWFILE = new 
PropertyDescriptor.Builder()
+.name("results-per-flowfile")
+.displayName("Results Per FlowFile")
+.description("How many results to put into a flowfile at once. 
The whole body will be treated as a JSON array of results.")
+.required(false)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.defaultValue("1")
+.build();
+
+static final PropertyDescriptor BATCH_SIZE = new 
PropertyDescriptor.Builder()
+.name("Batch Size")
+.displayName("Batch Size")
+.description("The number of elements returned from the server 
in one batch")
+.required(false)
+.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR)
+.defaultValue("1")
+.build();
+
 static List descriptors = new ArrayList<>();
 
+static final Relationship REL_SUCCESS = new 
Relationship.Builder().name("success").description("All files are routed to 
success").build();
--- End diff --

What do you think about the above comment? The description is a bit 
confusing in the context of RunMongoAggregation.


---


[GitHub] nifi pull request #2180: Added GetMongoAggregation to support running Mongo ...

2018-01-25 Thread mattyb149
Github user mattyb149 commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2180#discussion_r163884942
  
--- Diff: 
nifi-nar-bundles/nifi-mongodb-bundle/nifi-mongodb-processors/src/main/java/org/apache/nifi/processors/mongodb/AbstractMongoProcessor.java
 ---
@@ -221,4 +267,16 @@ protected WriteConcern getWriteConcern(final 
ProcessContext context) {
 }
 return writeConcern;
 }
+
+protected void writeBatch(String payload, FlowFile parent, 
ProcessContext context, ProcessSession session, Map extraAttributes, 
Relationship rel) throws UnsupportedEncodingException {
+String charset = parent != null ? 
context.getProperty(CHARSET).evaluateAttributeExpressions(parent).getValue()
+: 
context.getProperty(CHARSET).evaluateAttributeExpressions().getValue();
+
+FlowFile flowFile = session.create(parent);
+flowFile = session.importFrom(new 
ByteArrayInputStream(payload.getBytes(charset)), flowFile);
+flowFile = session.putAttribute(flowFile, 
CoreAttributes.MIME_TYPE.key(), "application/json");
--- End diff --

THANK YOU for including this :) so many processors that output a specific 
format (even converters!) do not set the mime.type attribute, which makes 
viewing them from the UI a pain.


---


[GitHub] nifi issue #2431: NIFI-4814 - Add distinctive attribute to S2S reporting tas...

2018-01-25 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2431
  
Should we add the ID as well? The name may not be unique; although that 
can be used strategically to "group" things with RouteOnAttribute, it's 
not exactly "distinct". The onus could be on the user to uniquely name their 
reporting tasks, but if they had access to both attributes, they could do 
different levels of grouping, filtering, etc.


---


[jira] [Commented] (NIFI-4809) Implement a SiteToSiteMetricsReportingTask

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339402#comment-16339402
 ] 

ASF GitHub Bot commented on NIFI-4809:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2430
  
Reusing the Ambari format certainly makes this one easier to implement, and 
as you said the user can convert later with a record processor, but I'm 
thinking it might be best to be able to specify the RecordWriter in the 
reporting task. That way the conversion, filtering, etc. can be done while the 
records are already in object format (i.e. parsed), which would save the step 
of re-parsing and re-writing. Using a JsonRecordSetWriter with "Inherit Record 
Schema" would result in the same behavior as you propose here, or you could 
provide a different format and/or filter out the fields, etc. by using a 
configured RecordSetWriter.

@markap14 suggested to encode the "input" schema as a file in 
src/main/resources and read it in when the class/instance is created, using the 
available util method to convert it to a RecordSchema. To be honest I would 
like to have the S2S Provenance reporting task do this as well, but that's a 
bit more invasive since it already exists. Since the one in this PR is new, 
it would be nice to have it be the exemplar for creating reporting services in the 
future. What do you think?


> Implement a SiteToSiteMetricsReportingTask
> --
>
> Key: NIFI-4809
> URL: https://issues.apache.org/jira/browse/NIFI-4809
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> At the moment there is an AmbariReportingTask to send the NiFi-related 
> metrics of the host to the Ambari Metrics Service. In a multi-cluster 
> configuration, or when working with MiNiFi (Java) agents, it might not be 
> possible for all the NiFi instances (NiFi and/or MiNiFi) to access the AMS 
> REST API.
> To solve this problem, a solution would be to implement a 
> SiteToSiteMetricsReportingTask to send the data via S2S to the "main" NiFi 
> instance/cluster that will be able to publish the metrics into AMS (using 
> InvokeHTTP). This way, it is possible to have the metrics of all the 
> instances exposed in one AMS instance.
> I propose to send the data formatted as we are doing right now in the Ambari 
> reporting task. If needed, it can be easily converted into another schema 
> using the record processors once received via S2S.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2430: NIFI-4809 - Implement a SiteToSiteMetricsReportingTask

2018-01-25 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2430
  
Reusing the Ambari format certainly makes this one easier to implement, and 
as you said the user can convert later with a record processor, but I'm 
thinking it might be best to be able to specify the RecordWriter in the 
reporting task. That way the conversion, filtering, etc. can be done while the 
records are already in object format (i.e. parsed), which would save the step 
of re-parsing and re-writing. Using a JsonRecordSetWriter with "Inherit Record 
Schema" would result in the same behavior as you propose here, or you could 
provide a different format and/or filter out the fields, etc. by using a 
configured RecordSetWriter.

@markap14 suggested to encode the "input" schema as a file in 
src/main/resources and read it in when the class/instance is created, using the 
available util method to convert it to a RecordSchema. To be honest I would 
like to have the S2S Provenance reporting task do this as well, but that's a 
bit more invasive since it already exists. Since the one in this PR is new, 
it would be nice to have it be the exemplar for creating reporting services in the 
future. What do you think?
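
A minimal sketch of the "schema in src/main/resources" suggestion, assuming an Avro schema file named metrics.avsc bundled with the NAR and NiFi's AvroTypeUtil as the conversion utility; the file name and class name are illustrative only:

```java
import java.io.IOException;
import java.io.InputStream;

import org.apache.avro.Schema;
import org.apache.nifi.avro.AvroTypeUtil;
import org.apache.nifi.serialization.record.RecordSchema;

class MetricsSchemaSketch {
    // Loaded once when the class is initialized, as suggested above.
    static final RecordSchema METRICS_SCHEMA = loadSchema();

    private static RecordSchema loadSchema() {
        // Assumption: metrics.avsc is an Avro schema packaged under src/main/resources.
        try (InputStream in = MetricsSchemaSketch.class.getResourceAsStream("/metrics.avsc")) {
            final Schema avroSchema = new Schema.Parser().parse(in);
            return AvroTypeUtil.createSchema(avroSchema);
        } catch (IOException e) {
            throw new IllegalStateException("Unable to load embedded metrics schema", e);
        }
    }
}
```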


---


[jira] [Commented] (NIFI-978) Support parameterized prepared statements in ExecuteSQL

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339380#comment-16339380
 ] 

ASF GitHub Bot commented on NIFI-978:
-

GitHub user mattyb149 opened a pull request:

https://github.com/apache/nifi/pull/2433

NIFI-978: Support parameterized statements in ExecuteSQL

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mattyb149/nifi NIFI-978

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2433.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2433


commit 7052824d88b3b9ceb04d7dda92cb63264ccf2a65
Author: Matthew Burgess 
Date:   2018-01-25T15:24:17Z

NIFI-978: Support parameterized statements in ExecuteSQL




> Support parameterized prepared statements in ExecuteSQL
> ---
>
> Key: NIFI-978
> URL: https://issues.apache.org/jira/browse/NIFI-978
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Daryl Teo
>Assignee: Matt Burgess
>Priority: Minor
>
> PutSQL and ExecuteSQL are highly inconsistent, which leads to confusion.
> - PutSQL relies on FlowFile content to execute its statement.
> - ExecuteSQL relies on SQL Select Command attribute
> - PutSQL supports parameterized statements through sql.args attributes
> - ExecuteSQL relies on Expression Language to insert dynamic properties
> The reliance on expression language for ExecuteSQL may also lead to potential 
> SQL injection if one is not careful as it is a string replacement.
> Therefore in the interest of reliability and consistency I highly recommend 
> that the SQL processors be standardised.
> Note: I prefer the sql command attribute for running SQL as opposed to the 
> (lower visibility) content based command specification. Having the query 
> attribute of ExecuteSQL, with the sql.args attributes of PutSQL would be a 
> great improvement. If you support this, I will create a new issue in Jira.
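
For reference, a rough sketch of the PutSQL-style convention the ticket asks ExecuteSQL to adopt: sql.args.N.type carries a java.sql.Types constant and sql.args.N.value carries the value, which together drive a JDBC PreparedStatement. The binding loop below is illustrative, not the processor's actual code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

class SqlArgsSketch {
    // Binds sql.args.1.*, sql.args.2.*, ... flow file attributes to the statement's ? parameters.
    static PreparedStatement bind(Connection conn, String sql, Map<String, String> attributes)
            throws SQLException {
        final PreparedStatement stmt = conn.prepareStatement(sql);
        for (int i = 1; attributes.containsKey("sql.args." + i + ".value"); i++) {
            final int jdbcType = Integer.parseInt(attributes.get("sql.args." + i + ".type"));
            stmt.setObject(i, attributes.get("sql.args." + i + ".value"), jdbcType);
        }
        return stmt;
    }
}
```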



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4815) Add EL support to ExecuteProcess

2018-01-25 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-4815:
-
Status: Patch Available  (was: Open)

> Add EL support to ExecuteProcess
> 
>
> Key: NIFI-4815
> URL: https://issues.apache.org/jira/browse/NIFI-4815
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> ExecuteProcess does not support EL for 'command' and 'working dir' 
> properties. That would be useful when promoting workflows between 
> environments.
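
As a rough illustration (not the PR's code), EL support in NiFi typically means marking the property descriptor as supporting expression language and evaluating it at run time; the descriptor below is a sketch with assumed values:

```java
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

class ElSupportSketch {
    static final PropertyDescriptor COMMAND = new PropertyDescriptor.Builder()
            .name("Command")
            .description("Specifies the command to be executed; supports Expression Language")
            .required(true)
            .expressionLanguageSupported(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();
    // At run time (inside the processor):
    //   final String command = context.getProperty(COMMAND).evaluateAttributeExpressions().getValue();
}
```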



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-383) No matter the start directory for minifi.sh, repositories are created in the current working directory

2018-01-25 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-383:
-
Fix Version/s: 0.5.0

> No matter the start directory for minifi.sh, repositories are created in the 
> current working directory
> --
>
> Key: MINIFICPP-383
> URL: https://issues.apache.org/jira/browse/MINIFICPP-383
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Affects Versions: 0.1.0, 0.2.0, 0.3.0
>Reporter: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>
> No matter where the user starts MiNiFi from, the directory in which the repos 
> are created is the current working directory. This should be changed to use a 
> persistent directory based on the root of the installation. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-383) No matter the start directory for minifi.sh, repositories are created in the current working directory

2018-01-25 Thread marco polo (JIRA)
marco polo created MINIFICPP-383:


 Summary: No matter the start directory for minifi.sh, repositories 
are created in the current working directory
 Key: MINIFICPP-383
 URL: https://issues.apache.org/jira/browse/MINIFICPP-383
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: marco polo


No matter where the user starts MiNiFi from, the directory in which the repos 
are created is the current working directory. This should be changed to use a 
persistent directory based on the root of the installation. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4815) Add EL support to ExecuteProcess

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339378#comment-16339378
 ] 

ASF GitHub Bot commented on NIFI-4815:
--

GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2432

NIFI-4815 - Add EL support to ExecuteProcess

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-4815

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2432.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2432


commit 6022373914ee05ca7b97709fcd51f9b233541958
Author: Pierre Villard 
Date:   2018-01-25T15:29:10Z

NIFI-4815 - Add EL support to ExecuteProcess




> Add EL support to ExecuteProcess
> 
>
> Key: NIFI-4815
> URL: https://issues.apache.org/jira/browse/NIFI-4815
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>Priority: Major
>
> ExecuteProcess does not support EL for 'command' and 'working dir' 
> properties. That would be useful when promoting workflows between 
> environments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2432: NIFI-4815 - Add EL support to ExecuteProcess

2018-01-25 Thread pvillard31
GitHub user pvillard31 opened a pull request:

https://github.com/apache/nifi/pull/2432

NIFI-4815 - Add EL support to ExecuteProcess

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI- where  is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pvillard31/nifi NIFI-4815

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2432.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2432


commit 6022373914ee05ca7b97709fcd51f9b233541958
Author: Pierre Villard 
Date:   2018-01-25T15:29:10Z

NIFI-4815 - Add EL support to ExecuteProcess




---


[jira] [Commented] (NIFI-4538) Add Process Group information to Search results

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339366#comment-16339366
 ] 

ASF GitHub Bot commented on NIFI-4538:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2364
  
@yuri1969 There appears to be a merge conflict with this branch and the 
current state of master. I'll be unavailable for a couple days but should have 
some time to devote to this review later next week.


> Add Process Group information to Search results
> ---
>
> Key: NIFI-4538
> URL: https://issues.apache.org/jira/browse/NIFI-4538
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Matt Burgess
>Assignee: Yuri
>Priority: Major
> Attachments: Screenshot from 2017-12-23 21-08-45.png, Screenshot from 
> 2017-12-23 21-42-24.png
>
>
> When querying for components in the Search bar, no Process Group (PG) 
> information is displayed. When copies of PGs are made on the canvas, the 
> search results can be hard to navigate, as you may jump into a different PG 
> than what you're looking for.
> I propose adding (conditionally, based on user permissions) the immediate 
> parent PG name and/or ID, as well as the top-level PG. In this case I mean 
> top-level being the highest parent PG except root, unless the component's 
> immediate parent PG is root, in which case it wouldn't need to be displayed 
> (or could be displayed as the root PG, albeit a duplicate of the immediate).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2364: NIFI-4538 - Add Process Group information to...

2018-01-25 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2364
  
@yuri1969 There appears to be a merge conflict with this branch and the 
current state of master. I'll be unavailable for a couple days but should have 
some time to devote to this review later next week.


---


[jira] [Assigned] (NIFI-978) Support parameterized prepared statements in ExecuteSQL

2018-01-25 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-978:
-

Assignee: Matt Burgess

> Support parameterized prepared statements in ExecuteSQL
> ---
>
> Key: NIFI-978
> URL: https://issues.apache.org/jira/browse/NIFI-978
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Daryl Teo
>Assignee: Matt Burgess
>Priority: Minor
>
> PutSQL and ExecuteSQL are highly inconsistent, which leads to confusion.
> - PutSQL relies on FlowFile content to execute its statement.
> - ExecuteSQL relies on SQL Select Command attribute
> - PutSQL supports parameterized statements through sql.args attributes
> - ExecuteSQL relies on Expression Language to insert dynamic properties
> The reliance on expression language for ExecuteSQL may also lead to potential 
> SQL injection if one is not careful as it is a string replacement.
> Therefore in the interest of reliability and consistency I highly recommend 
> that the SQL processors be standardised.
> Note: I prefer the sql command attribute for running SQL as opposed to the 
> (lower visibility) content based command specification. Having the query 
> attribute of ExecuteSQL, with the sql.args attributes of PutSQL would be a 
> great improvement. If you support this, I will create a new issue in Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (MINIFICPP-382) Add SUSE support to bootstrap process.

2018-01-25 Thread marco polo (JIRA)

 [ 
https://issues.apache.org/jira/browse/MINIFICPP-382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

marco polo updated MINIFICPP-382:
-
Description: 
Add support to bootstrap process. 

 

Currently have tested on OpenSUSE and SLES12. 

 

SLES12/OpenSUSE – built and tested, verifying SiteToSite

SLES11 – TBD

  was:
Add support to bootstrap process. 

 

Currently have tested on OpenSUSE and SLES12. 


> Add SUSE support to bootstrap process. 
> ---
>
> Key: MINIFICPP-382
> URL: https://issues.apache.org/jira/browse/MINIFICPP-382
> Project: NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: marco polo
>Assignee: marco polo
>Priority: Major
> Fix For: 0.5.0
>
>
> Add support to bootstrap process. 
>  
> Currently have tested on OpenSUSE and SLES12. 
>  
> SLES12/OpenSUSE – built and tested, verifying SiteToSite
> SLES11 – TBD



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4395) GenerateTableFetch can't fetch column type by state after instance reboot

2018-01-25 Thread Sanjay Saini (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339346#comment-16339346
 ] 

Sanjay Saini commented on NIFI-4395:


Yes, I think you should reopen the issue because the fix is partial.

> GenerateTableFetch can't fetch column type by state after instance reboot
> -
>
> Key: NIFI-4395
> URL: https://issues.apache.org/jira/browse/NIFI-4395
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Deon Huang
>Assignee: Deon Huang
>Priority: Major
> Fix For: 1.4.0
>
> Attachments: GenerateTableFetch_Exception.png
>
>
> The problem can easily be reproduced.
> Once GenerateTableFetch stores state and encounters a NiFi instance reboot
> (with dynamic table naming by expression language),
> the exception will occur.
> The error in the source code is listed below.
> ```
> if (type == null) {
> // This shouldn't happen as we are populating columnTypeMap when the 
> processor is scheduled or when the first maximum is observed
> throw new IllegalArgumentException("No column type found for: " + 
> colName);
> }
> ```
> When this situation happens, the FlowFile will also be grabbed and can't be 
> released or observed.
> The processor can't retrieve the existing column type from *columnTypeMap* across an 
> instance reboot.
> Hence it will inevitably get this exception, roll back the FlowFile and never succeed.
> The QueryDatabaseTable processor will not encounter this exception because it calls 
> setup(context) every time,
> while GenerateTableFetch will not pass the condition and thus tries to fetch the 
> column type from a 0-length columnTypeMap.
> ---
> if (!isDynamicTableName && !isDynamicMaxValues) {
> super.setup(context);
> }
> ---
> I can take the issue if it is recognized as a bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4801) Rest-api swagger definition produces non-functional template import in Python

2018-01-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339342#comment-16339342
 ] 

ASF subversion and git services commented on NIFI-4801:
---

Commit c4e2ac7cda45591d357519217687aa7979bae8fb in nifi's branch 
refs/heads/master from [~kdoran]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=c4e2ac7 ]

NIFI-4801 Fixes Swagger spec for uploadTemplate. This closes #2428


> Rest-api swagger definition produces non-functional template import in Python
> -
>
> Key: NIFI-4801
> URL: https://issues.apache.org/jira/browse/NIFI-4801
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: Python 2.7/3.6
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: corrected-swagger.json
>
>
> The swagger.json produced when compiling NiFi-1.5.0 results in a 
> ProcessgroupsApi().upload_template function that only accepts the id of the 
> Process Group to receive the template, and no option to specify the template 
> itself.
> It would appear that the underlying API call expects the template to be the 
> body of the request, but the produced function does not allow it to be 
> specified. This is changed from NiFi-1.2.0 where a  'template' keyword 
> argument was included.
> It may also be related to how the TemplatesApi().export_template function 
> used to produce a TemplateDTO and now produces a string. 
> I am unsure in which version since 1.2.0 this changed, it may not 
> specifically be just 1.5.0 code.
> An example of the procedurally generated code can be found at:
> [https://github.com/Chaffelson/nifi-python-swagger-client/blob/master/swagger_client/apis/processgroups_api.py]
> And documentation at:
> [http://nifi-python-swagger-client.readthedocs.io/en/latest/ProcessgroupsApi/#upload_template]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4801) Rest-api swagger definition produces non-functional template import in Python

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339345#comment-16339345
 ] 

ASF GitHub Bot commented on NIFI-4801:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2428


> Rest-api swagger definition produces non-functional template import in Python
> -
>
> Key: NIFI-4801
> URL: https://issues.apache.org/jira/browse/NIFI-4801
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: Python 2.7/3.6
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: corrected-swagger.json
>
>
> The swagger.json produced when compiling NiFi-1.5.0 results in a 
> ProcessgroupsApi().upload_template function that only accepts the id of the 
> Process Group to receive the template, and no option to specify the template 
> itself.
> It would appear that the underlying API call expects the template to be the 
> body of the request, but the produced function does not allow it to be 
> specified. This is changed from NiFi-1.2.0 where a  'template' keyword 
> argument was included.
> It may also be related to how the TemplatesApi().export_template function 
> used to produce a TemplateDTO and now produces a string. 
> I am unsure in which version since 1.2.0 this changed, it may not 
> specifically be just 1.5.0 code.
> An example of the procedurally generated code can be found at:
> [https://github.com/Chaffelson/nifi-python-swagger-client/blob/master/swagger_client/apis/processgroups_api.py]
> And documentation at:
> [http://nifi-python-swagger-client.readthedocs.io/en/latest/ProcessgroupsApi/#upload_template]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi pull request #2428: NIFI-4801 Fixes Swagger spec for uploadTemplate

2018-01-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2428


---


[jira] [Updated] (NIFI-4801) Rest-api swagger definition produces non-functional template import in Python

2018-01-25 Thread Matt Gilman (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Gilman updated NIFI-4801:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Rest-api swagger definition produces non-functional template import in Python
> -
>
> Key: NIFI-4801
> URL: https://issues.apache.org/jira/browse/NIFI-4801
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: Python 2.7/3.6
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: corrected-swagger.json
>
>
> The swagger.json produced when compiling NiFi-1.5.0 results in a 
> ProcessgroupsApi().upload_template function that only accepts the id of the 
> Process Group to receive the template, and no option to specify the template 
> itself.
> It would appear that the underlying API call expects the template to be the 
> body of the request, but the produced function does not allow it to be 
> specified. This is changed from NiFi-1.2.0 where a  'template' keyword 
> argument was included.
> It may also be related to how the TemplatesApi().export_template function 
> used to produce a TemplateDTO and now produces a string. 
> I am unsure in which version since 1.2.0 this changed, it may not 
> specifically be just 1.5.0 code.
> An example of the procedurally generated code can be found at:
> [https://github.com/Chaffelson/nifi-python-swagger-client/blob/master/swagger_client/apis/processgroups_api.py]
> And documentation at:
> [http://nifi-python-swagger-client.readthedocs.io/en/latest/ProcessgroupsApi/#upload_template]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4801) Rest-api swagger definition produces non-functional template import in Python

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339344#comment-16339344
 ] 

ASF GitHub Bot commented on NIFI-4801:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2428
  
Thanks @kevdoran! This has been merged to master.


> Rest-api swagger definition produces non-functional template import in Python
> -
>
> Key: NIFI-4801
> URL: https://issues.apache.org/jira/browse/NIFI-4801
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: Python 2.7/3.6
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: corrected-swagger.json
>
>
> The swagger.json produced when compiling NiFi-1.5.0 results in a 
> ProcessgroupsApi().upload_template function that only accepts the id of the 
> Process Group to receive the template, and no option to specify the template 
> itself.
> It would appear that the underlying API call expects the template to be the 
> body of the request, but the produced function does not allow it to be 
> specified. This is changed from NiFi-1.2.0 where a  'template' keyword 
> argument was included.
> It may also be related to how the TemplatesApi().export_template function 
> used to produce a TemplateDTO and now produces a string. 
> I am unsure in which version since 1.2.0 this changed, it may not 
> specifically be just 1.5.0 code.
> An example of the procedurally generated code can be found at:
> [https://github.com/Chaffelson/nifi-python-swagger-client/blob/master/swagger_client/apis/processgroups_api.py]
> And documentation at:
> [http://nifi-python-swagger-client.readthedocs.io/en/latest/ProcessgroupsApi/#upload_template]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2428: NIFI-4801 Fixes Swagger spec for uploadTemplate

2018-01-25 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2428
  
Thanks @kevdoran! This has been merged to master.


---


[jira] [Created] (NIFI-4815) Add EL support to ExecuteProcess

2018-01-25 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-4815:


 Summary: Add EL support to ExecuteProcess
 Key: NIFI-4815
 URL: https://issues.apache.org/jira/browse/NIFI-4815
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Pierre Villard
Assignee: Pierre Villard


ExecuteProcess does not support EL for 'command' and 'working dir' properties. 
That would be useful when promoting workflows between environments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (MINIFICPP-382) Add SUSE support to bootstrap process.

2018-01-25 Thread marco polo (JIRA)
marco polo created MINIFICPP-382:


 Summary: Add SUSE support to bootstrap process. 
 Key: MINIFICPP-382
 URL: https://issues.apache.org/jira/browse/MINIFICPP-382
 Project: NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: marco polo
Assignee: marco polo
 Fix For: 0.5.0


Add support to bootstrap process. 

 

Currently have tested on OpenSUSE and SLES12. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4801) Rest-api swagger definition produces non-functional template import in Python

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339299#comment-16339299
 ] 

ASF GitHub Bot commented on NIFI-4801:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2428
  
Will review...


> Rest-api swagger definition produces non-functional template import in Python
> -
>
> Key: NIFI-4801
> URL: https://issues.apache.org/jira/browse/NIFI-4801
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: Python 2.7/3.6
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: corrected-swagger.json
>
>
> The swagger.json produced when compiling NiFi-1.5.0 results in a 
> ProcessgroupsApi().upload_template function that only accepts the id of the 
> Process Group to receive the template, and no option to specify the template 
> itself.
> It would appear that the underlying API call expects the template to be the 
> body of the request, but the produced function does not allow it to be 
> specified. This is changed from NiFi-1.2.0 where a  'template' keyword 
> argument was included.
> It may also be related to how the TemplatesApi().export_template function 
> used to produce a TemplateDTO and now produces a string. 
> I am unsure in which version since 1.2.0 this changed, it may not 
> specifically be just 1.5.0 code.
> An example of the procedurally generated code can be found at:
> [https://github.com/Chaffelson/nifi-python-swagger-client/blob/master/swagger_client/apis/processgroups_api.py]
> And documentation at:
> [http://nifi-python-swagger-client.readthedocs.io/en/latest/ProcessgroupsApi/#upload_template]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2428: NIFI-4801 Fixes Swagger spec for uploadTemplate

2018-01-25 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2428
  
Will review...


---


[jira] [Commented] (NIFIREG-129) swagger spec missing snapshot_metadata param for bucket_flows_api

2018-01-25 Thread Daniel Chaffelson (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339253#comment-16339253
 ] 

Daniel Chaffelson commented on NIFIREG-129:
---

The generated client is available here for you to look at: 
https://github.com/Chaffelson/nipyapi/blob/master/nipyapi/registry/apis/bucket_flows_api.py

> swagger spec missing snapshot_metadata param for bucket_flows_api
> -
>
> Key: NIFIREG-129
> URL: https://issues.apache.org/jira/browse/NIFIREG-129
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0, 0.2.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
>
> When calling methods to retrieve versioned flows from the bucket_flows_api ( 
> get_flow, get_flow_version, get_latest_flow_version, etc) the spec calls for 
> a bucket_id and a flow_id, however the endpoint returns the error "Invalid 
> value for `snapshot_metadata`, must not be `None`"
> Presumably there is a parameter missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-129) swagger spec missing snapshot_metadata param for bucket_flows_api

2018-01-25 Thread Daniel Chaffelson (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339249#comment-16339249
 ] 

Daniel Chaffelson commented on NIFIREG-129:
---

Here's the Python output - there's nothing in the nifi-registry-app.log because 
it's not getting out of the client.
{noformat}
def get_latest_flow_ver(bucket_id, flow_id):
    try:
        return registry.BucketFlowsApi().get_latest_flow_version(
            bucket_id, flow_id
        )
    except ApiExceptionR as e:
        raise ValueError(e.body)
>>versioning.get_latest_flow_ver(bucket.identifier, flow.identifier)
Traceback (most recent call last):
File "", line 1, in 
File "/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/versioning.py", line 230, 
in get_latest_flow_ver
bucket_id, flow_id
File 
"/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/apis/bucket_flows_api.py",
 line 853, in get_latest_flow_version
(data) = self.get_latest_flow_version_with_http_info(bucket_id, flow_id, 
**kwargs)
File 
"/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/apis/bucket_flows_api.py",
 line 940, in get_latest_flow_version_with_http_info
collection_formats=collection_formats)
File "/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/api_client.py", 
line 326, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/api_client.py", 
line 161, in __call_api
return_data = self.deserialize(response_data, response_type)
File "/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/api_client.py", 
line 239, in deserialize
return self.__deserialize(data, response_type)
File "/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/api_client.py", 
line 279, in __deserialize
return self.__deserialize_model(data, klass)
File "/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/api_client.py", 
line 629, in __deserialize_model
kwargs[attr] = self.__deserialize(value, attr_type)
File "/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/api_client.py", 
line 279, in __deserialize
return self.__deserialize_model(data, klass)
File "/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/api_client.py", 
line 631, in __deserialize_model
instance = klass(**kwargs)
File 
"/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/models/versioned_flow_snapshot.py",
 line 60, in __init__
self.snapshot_metadata = snapshot_metadata
File 
"/Users/dchaffey/PycharmProjects/Nipyapi/nipyapi/registry/models/versioned_flow_snapshot.py",
 line 90, in snapshot_metadata
raise ValueError("Invalid value for `snapshot_metadata`, must not be `None`")
ValueError: Invalid value for `snapshot_metadata`, must not be `None`{noformat}

> swagger spec missing snapshot_metadata param for bucket_flows_api
> -
>
> Key: NIFIREG-129
> URL: https://issues.apache.org/jira/browse/NIFIREG-129
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0, 0.2.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
>
> When calling methods to retrieve versioned flows from the bucket_flows_api 
> (get_flow, get_flow_version, get_latest_flow_version, etc.), the spec calls for 
> a bucket_id and a flow_id; however, the endpoint returns the error "Invalid 
> value for `snapshot_metadata`, must not be `None`".
> Presumably a parameter is missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFIREG-129) swagger spec missing snapshot_metadata param for bucket_flows_api

2018-01-25 Thread Kevin Doran (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFIREG-129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339245#comment-16339245
 ] 

Kevin Doran commented on NIFIREG-129:
-

[~chaffelson] - I don't think this is a missing API parameter. To confirm, are 
you referring to {{GET /buckets//flows}}? Internally, this endpoint 
composes a response from internal indexes, and you may have come across a bug 
there. Can you post the full HTTP response sent from the server as well as a 
stack trace for the error from nifi-registry-app.log?

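For reference, a minimal sketch of one way to capture that raw server payload with the generated client, assuming the swagger-codegen `_preload_content` keyword visible in the traceback above; the bucket and flow identifiers below are placeholders, not values from this thread.
{noformat}
# Hypothetical debugging snippet: bypass model deserialization so the raw
# HTTP status and JSON body returned by nifi-registry can be inspected.
import nipyapi.registry as registry

bucket_id = "11111111-1111-1111-1111-111111111111"  # placeholder bucket UUID
flow_id = "22222222-2222-2222-2222-222222222222"    # placeholder flow UUID

raw = registry.BucketFlowsApi().get_latest_flow_version(
    bucket_id, flow_id,
    _preload_content=False  # return the raw response instead of a model object
)
print(raw.status)
print(raw.data)  # full response body exactly as sent by the server
{noformat}
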
> swagger spec missing snapshot_metadata param for bucket_flows_api
> -
>
> Key: NIFIREG-129
> URL: https://issues.apache.org/jira/browse/NIFIREG-129
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0, 0.2.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
>
> When calling methods to retrieve versioned flows from the bucket_flows_api 
> (get_flow, get_flow_version, get_latest_flow_version, etc.), the spec calls for 
> a bucket_id and a flow_id; however, the endpoint returns the error "Invalid 
> value for `snapshot_metadata`, must not be `None`".
> Presumably a parameter is missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (NIFIREG-129) swagger spec missing snapshot_metadata param for bucket_flows_api

2018-01-25 Thread Kevin Doran (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFIREG-129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Doran reassigned NIFIREG-129:
---

Assignee: Kevin Doran

> swagger spec missing snapshot_metadata param for bucket_flows_api
> -
>
> Key: NIFIREG-129
> URL: https://issues.apache.org/jira/browse/NIFIREG-129
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.1.0, 0.2.0
>Reporter: Daniel Chaffelson
>Assignee: Kevin Doran
>Priority: Major
>
> When calling methods to retrieve versioned flows from the bucket_flows_api 
> (get_flow, get_flow_version, get_latest_flow_version, etc.), the spec calls for 
> a bucket_id and a flow_id; however, the endpoint returns the error "Invalid 
> value for `snapshot_metadata`, must not be `None`".
> Presumably a parameter is missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (NIFIREG-129) swagger spec missing snapshot_metadata param for bucket_flows_api

2018-01-25 Thread Daniel Chaffelson (JIRA)
Daniel Chaffelson created NIFIREG-129:
-

 Summary: swagger spec missing snapshot_metadata param for 
bucket_flows_api
 Key: NIFIREG-129
 URL: https://issues.apache.org/jira/browse/NIFIREG-129
 Project: NiFi Registry
  Issue Type: Bug
Affects Versions: 0.1.0, 0.2.0
Reporter: Daniel Chaffelson


When calling methods to retrieve versioned flows from the bucket_flows_api 
(get_flow, get_flow_version, get_latest_flow_version, etc.), the spec calls for a 
bucket_id and a flow_id; however, the endpoint returns the error "Invalid value 
for `snapshot_metadata`, must not be `None`".

Presumably a parameter is missing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4289) Implement put processor for InfluxDB

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339135#comment-16339135
 ] 

ASF GitHub Bot commented on NIFI-4289:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2101
  
I'm going to try to get some time today and tomorrow to go through the code 
itself, but I was able to build a flow that uses the line protocol as 
described, and it successfully inserted multiple entries into InfluxDB. So... 
so far, so good.

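For anyone following along, a hypothetical sketch (not taken from the PR) of the kind of line-protocol record involved, written straight to a local InfluxDB 1.x /write endpoint; the database, measurement, tag, and field names are placeholders.
```
# Build one line-protocol point and POST it to InfluxDB's 1.x write API.
# Format: <measurement>,<tag_set> <field_set> <timestamp>
import time
import requests

point = "cpu_load,host=server01,region=us-west value=0.64 {ts}".format(
    ts=int(time.time() * 1e9)  # line protocol expects a nanosecond timestamp
)
resp = requests.post(
    "http://localhost:8086/write",
    params={"db": "telemetry"},  # placeholder database name
    data=point,
)
resp.raise_for_status()  # InfluxDB answers 204 No Content on success
```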

> Implement put processor for InfluxDB
> 
>
> Key: NIFI-4289
> URL: https://issues.apache.org/jira/browse/NIFI-4289
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Affects Versions: 1.3.0
> Environment: All
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: insert, measurements,, put, timeseries
>
> Support inserting time series measurements into InfluxDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2101: NIFI-4289 - InfluxDB put processor

2018-01-25 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2101
  
I'm going to try to get some time today and tomorrow to go through the code 
itself, but I was able to build a flow that uses the line protocol as 
described, and it successfully inserted multiple entries into InfluxDB. So... 
so far, so good.


---


[jira] [Commented] (NIFI-4367) InvokedScriptedProcessor does not support scripted processor that extends AbstractProcessor

2018-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338956#comment-16338956
 ] 

ASF GitHub Bot commented on NIFI-4367:
--

Github user mgaido91 commented on the issue:

https://github.com/apache/nifi/pull/2201
  
@frett27 I tried your fix, but if you add custom properties to the scripted 
processor, it is reported as invalid with the message:
```
... is not a supported property.
```
Thus a more complete UT might help uncover all the problems and solve 
all of them, because at the moment this fix does not seem to be enough to make 
`InvokeScriptedProcessor` work with `AbstractProcessor`.

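For context, a hypothetical Jython sketch (assumed, not taken from the PR) of the scenario described above: an InvokeScriptedProcessor script that extends AbstractProcessor and declares its own property, which is what currently trips that validation error.
```
# Hypothetical Jython script body for InvokeScriptedProcessor. The class
# extends AbstractProcessor and declares one custom property; with the fix
# under review, that property is still rejected as "not a supported property".
from org.apache.nifi.components import PropertyDescriptor
from org.apache.nifi.processor import AbstractProcessor
from org.apache.nifi.processor.util import StandardValidators

class GreetingProcessor(AbstractProcessor):
    GREETING = (PropertyDescriptor.Builder()
                .name("Greeting")
                .description("Custom property declared by the script")
                .required(True)
                .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
                .build())

    def getSupportedPropertyDescriptors(self):
        # Expose the script's own property to the framework
        return [self.GREETING]

    def onTrigger(self, context, session):
        # No-op for the sketch; a real script would pull and route FlowFiles here
        pass

# InvokeScriptedProcessor looks for a variable named "processor"
processor = GreetingProcessor()
```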

> InvokedScriptedProcessor does not support scripted processor that extends 
> AbstractProcessor
> ---
>
> Key: NIFI-4367
> URL: https://issues.apache.org/jira/browse/NIFI-4367
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Patrice Freydiere
>Priority: Major
>  Labels: InvokeScriptedProcessor, validation
>
> InvokeScriptedProcessor passes its ValidationContext to the inner script's 
> validate call,
> InvokeScriptedProcessor line 465: final 
> Collection<ValidationResult> instanceResults = instance.validate(context);
>  
> The problem is that the InvokeScriptedProcessor properties (such as the 
> Script File PropertyDescriptor) end up being validated by the script: if the 
> script derives from AbstractConfigurableComponent, it validates all the 
> properties in the context.
> The context should be refined to remove the InvokeScriptedProcessor 
> properties.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] nifi issue #2201: NIFI-4367 Fix on processor for permit deriving script clas...

2018-01-25 Thread mgaido91
Github user mgaido91 commented on the issue:

https://github.com/apache/nifi/pull/2201
  
@frett27 I tried your fix, but if you add custom properties to the scripted 
processor, it is reported as invalid with the message:
```
... is not a supported property.
```
Thus a more complete UT might help uncover all the problems and solve 
all of them, because at the moment this fix does not seem to be enough to make 
`InvokeScriptedProcessor` work with `AbstractProcessor`.


---


[jira] [Closed] (NIFI-4804) Duplicated properties when creating a Reporting Task with the REST API

2018-01-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/NIFI-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sébastien Bouchex Bellomié closed NIFI-4804.


> Duplicated properties when creating a Reporting Task with the REST API
> --
>
> Key: NIFI-4804
> URL: https://issues.apache.org/jira/browse/NIFI-4804
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Sébastien Bouchex Bellomié
>Priority: Major
>
> Testcase: I want to create a reporting task with the REST API using Postman 
> (Chrome extension)
> Url : [https://localhost:9443/nifi-api/controller/reporting-tasks]
> Method : POST
> Body : 
> \{"revision":{"version":0},"component":\{"name":"","type":"org.apache.nifi.metrics.reporting.task.MetricsReportingTask","properties":{"Metric
>  Reporter Service":"x","Process Group ID":""}}}
> Result :
> the returned JSON contains 4 properties:
> {noformat}
> "properties": {
> "metric reporter service": null,
> "process group id": null,
> "Metric Reporter Service": "x",
> "Process Group ID": ""
> },{noformat}
> In NiFi, the reporting task is created, but the properties are duplicated.
>  
> Note: even if the "Metric Reporter Service" value is valid, it does not change 
> anything.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (NIFI-4804) Duplicated properties when creating a Reporting Task with the REST API

2018-01-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/NIFI-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sébastien Bouchex Bellomié resolved NIFI-4804.
--
Resolution: Not A Bug

I was passing the name with Upper Case characters
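
For reference, a hypothetical sketch of the same request issued with the lower-case descriptor names the server echoes back, which avoids the duplicated entries; the URL comes from the description above, while the task name and controller service id are placeholders.
{noformat}
# Hypothetical sketch: create the reporting task using the canonical
# (lower-case) property names, so the upper-cased variants are not added
# as extra properties. Values below are placeholders.
import requests

body = {
    "revision": {"version": 0},
    "component": {
        "name": "metrics-reporting",
        "type": "org.apache.nifi.metrics.reporting.task.MetricsReportingTask",
        "properties": {
            "metric reporter service": "4f2a3b1c-0158-1000-0000-000000000000",
            "process group id": ""
        }
    }
}
resp = requests.post(
    "https://localhost:9443/nifi-api/controller/reporting-tasks",
    json=body,
    verify=False,  # local sandbox with a self-signed certificate
)
resp.raise_for_status()
print(resp.json()["component"]["properties"])
{noformat}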

> Duplicated properties when creating a Reporting Task with the REST API
> --
>
> Key: NIFI-4804
> URL: https://issues.apache.org/jira/browse/NIFI-4804
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Sébastien Bouchex Bellomié
>Priority: Major
>
> Testcase: I want to create a reporting task with the REST API using Postman 
> (Chrome extension)
> Url : [https://localhost:9443/nifi-api/controller/reporting-tasks]
> Method : POST
> Body : 
> \{"revision":{"version":0},"component":\{"name":"","type":"org.apache.nifi.metrics.reporting.task.MetricsReportingTask","properties":{"Metric
>  Reporter Service":"x","Process Group ID":""}}}
> Result :
> the returned JSON contains 4 properties:
> {noformat}
> "properties": {
> "metric reporter service": null,
> "process group id": null,
> "Metric Reporter Service": "x",
> "Process Group ID": ""
> },{noformat}
> In NiFi, the reporting task is created, but the properties are duplicated.
>  
> Note: even if the "Metric Reporter Service" value is valid, it does not change 
> anything.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)