[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #827: MINIFICPP-1273 - Drain connections on flow shutdown

2020-06-30 Thread GitBox


adamdebreceni commented on a change in pull request #827:
URL: https://github.com/apache/nifi-minifi-cpp/pull/827#discussion_r448154480



##
File path: libminifi/src/core/ProcessGroup.cpp
##
@@ -92,13 +92,12 @@ ProcessGroup::~ProcessGroup() {
 onScheduleTimer_->stop();
   }
 
-  for (auto &&connection : connections_) {
+  for (auto&& connection : connections_) {

Review comment:
   I am torn between always using `auto&&` in range-based for loops and never 
using it. It is really only useful when the iteration yields a temporary proxy 
object, like with `std::vector<bool>`. `auto&` could be used; I wouldn't use 
`const auto&`, as it makes me think that the connection is const, whereas only 
the `std::shared_ptr<...>` is
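The distinction can be sketched in a few lines (a hypothetical standalone example, not minifi code; `Connection`, `bump_through_const_ref`, and `set_first_flag` are stand-in names):

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Connection { int id; };

// `const auto&` makes the shared_ptr const, not the Connection it owns:
// the pointee is still mutable through operator->.
inline int bump_through_const_ref(std::vector<std::shared_ptr<Connection>>& connections) {
    for (const auto& connection : connections) {
        connection->id += 1;  // compiles: only the shared_ptr is const
    }
    return connections.front()->id;
}

// `auto&&` also binds to proxy objects, e.g. the ones std::vector<bool>
// yields; plain `auto&` would fail to compile on this loop.
inline bool set_first_flag(std::vector<bool>& flags) {
    for (auto&& flag : flags) {
        flag = true;  // writes through the proxy
    }
    return flags.front();
}
```

So `auto&&` only buys anything for proxy-yielding containers; for a `std::vector<std::shared_ptr<...>>`, `auto&` and `const auto&` differ only in how const-looking the handle is.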





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (MINIFICPP-1231) MergeContent processor doesn't properly validate properties

2020-06-30 Thread Adam Debreceni (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Debreceni resolved MINIFICPP-1231.
---
Resolution: Fixed

> MergeContent processor doesn't properly validate properties
> ---
>
> Key: MINIFICPP-1231
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1231
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Arpad Boda
>Assignee: Adam Debreceni
>Priority: Major
> Fix For: 0.8.0
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> Properties that require selecting a value (such as MergeStrategy, 
> MergeFormat, KeepPath, etc.) should have proper validation, and allowable 
> values should be included in the manifest.
> Property validators should be used. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7568) Ensure Kerberos mappings are applied correctly

2020-06-30 Thread Nathan Gough (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Gough updated NIFI-7568:
---
Description: Kerberos mappings are not being applied consistently across 
user actions/JWT authentication token creation. Ensure that the mappings are 
applied in all instances so that the user database stores a mapped ID, and that 
login and logout tokens utilize the mapped identity.  (was: Refactor 
logout code when using JWT based authentication methods such as LDAP and 
Kerberos.)

> Ensure Kerberos mappings are applied correctly
> --
>
> Key: NIFI-7568
> URL: https://issues.apache.org/jira/browse/NIFI-7568
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.11.4
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Kerberos mappings are not being applied consistently across user actions/JWT 
> authentication token creation. Ensure that the mappings are applied in all 
> instances so that the user database stores a mapped ID, and that login 
> and logout tokens utilize the mapped identity.





[jira] [Updated] (NIFI-7568) Ensure Kerberos mappings are applied correctly

2020-06-30 Thread Nathan Gough (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Gough updated NIFI-7568:
---
Summary: Ensure Kerberos mappings are applied correctly  (was: Improve 
token based logout)

> Ensure Kerberos mappings are applied correctly
> --
>
> Key: NIFI-7568
> URL: https://issues.apache.org/jira/browse/NIFI-7568
> Project: Apache NiFi
>  Issue Type: Improvement
>Affects Versions: 1.11.4
>Reporter: Nathan Gough
>Assignee: Nathan Gough
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Refactor logout code when using JWT based authentication methods such as LDAP 
> and Kerberos.





[GitHub] [nifi] thenatog opened a new pull request #4377: NIFI-7568

2020-06-30 Thread GitBox


thenatog opened a new pull request #4377:
URL: https://github.com/apache/nifi/pull/4377


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [x] Have you verified that the full build is successful on JDK 8?
   - [x] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Comment Edited] (NIFI-2072) Support named captures in ExtractText

2020-06-30 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148956#comment-17148956
 ] 

Otto Fowler edited comment on NIFI-2072 at 6/30/20, 9:21 PM:
-

[~pvillard]

Something like this?  The restriction on the property to enable is: if you 
want named groups, all your capturing groups MUST be named. You can't mix named 
and unnamed captures. If they don't match, it falls back to the old way.

But I haven't written the verify yet either


{code:java}
final String SAMPLE_STRING = "foo\r\nbar1\r\nbar2\r\nbar3\r\nhello\r\nworld\r\n";

@Test
public void testProcessorWithGroupNames() throws Exception {

    final TestRunner testRunner = TestRunners.newTestRunner(new ExtractText());

    testRunner.setProperty("regex.result1", "(?s)(?<all>.*)");
    testRunner.setProperty("regex.result2", "(?s).*(?<bar1>bar1).*");
    testRunner.setProperty("regex.result3", "(?s).*?(?<bar1>bar\\d).*");
    testRunner.setProperty("regex.result4", "(?s).*?(?:bar\\d).*?(?<bar2>bar\\d).*?(?<bar3>bar3).*");
    testRunner.setProperty("regex.result5", "(?s).*(?<bar3>bar\\d).*");
    testRunner.setProperty("regex.result6", "(?s)^(?<all>.*)$");
    testRunner.setProperty("regex.result7", "(?s)(?<miss>XXX)");
    testRunner.setProperty(ENABLE_NAMED_GROUPS, "true");
    testRunner.enqueue(SAMPLE_STRING.getBytes("UTF-8"));
    testRunner.run();

    testRunner.assertAllFlowFilesTransferred(ExtractText.REL_MATCH, 1);
    final MockFlowFile out = testRunner.getFlowFilesForRelationship(ExtractText.REL_MATCH).get(0);
    java.util.Map<String, String> attributes = out.getAttributes();
    out.assertAttributeEquals("regex.result1.all", SAMPLE_STRING);
    out.assertAttributeEquals("regex.result2.bar1", "bar1");
    out.assertAttributeEquals("regex.result3.bar1", "bar1");
    out.assertAttributeEquals("regex.result4.bar2", "bar2");
    out.assertAttributeEquals("regex.result4.bar2", "bar2");
    out.assertAttributeEquals("regex.result4.bar3", "bar3");
    out.assertAttributeEquals("regex.result5.bar3", "bar3");
    out.assertAttributeEquals("regex.result6.all", SAMPLE_STRING);
    out.assertAttributeEquals("regex.result7.miss", null);
}
{code}



was (Author: ottobackwards):
[~pvillard]

Something like this?  The restriction on the property to enable is: if you 
want named groups, all your capturing groups MUST be named. You can't mix named 
and unnamed captures.


{code:java}
final String SAMPLE_STRING = "foo\r\nbar1\r\nbar2\r\nbar3\r\nhello\r\nworld\r\n";

@Test
public void testProcessorWithGroupNames() throws Exception {

    final TestRunner testRunner = TestRunners.newTestRunner(new ExtractText());

    testRunner.setProperty("regex.result1", "(?s)(?<all>.*)");
    testRunner.setProperty("regex.result2", "(?s).*(?<bar1>bar1).*");
    testRunner.setProperty("regex.result3", "(?s).*?(?<bar1>bar\\d).*");
    testRunner.setProperty("regex.result4", "(?s).*?(?:bar\\d).*?(?<bar2>bar\\d).*?(?<bar3>bar3).*");
    testRunner.setProperty("regex.result5", "(?s).*(?<bar3>bar\\d).*");
    testRunner.setProperty("regex.result6", "(?s)^(?<all>.*)$");
    testRunner.setProperty("regex.result7", "(?s)(?<miss>XXX)");
    testRunner.setProperty(ENABLE_NAMED_GROUPS, "true");
    testRunner.enqueue(SAMPLE_STRING.getBytes("UTF-8"));
    testRunner.run();

    testRunner.assertAllFlowFilesTransferred(ExtractText.REL_MATCH, 1);
    final MockFlowFile out = testRunner.getFlowFilesForRelationship(ExtractText.REL_MATCH).get(0);
    java.util.Map<String, String> attributes = out.getAttributes();
    out.assertAttributeEquals("regex.result1.all", SAMPLE_STRING);
    out.assertAttributeEquals("regex.result2.bar1", "bar1");
    out.assertAttributeEquals("regex.result3.bar1", "bar1");
    out.assertAttributeEquals("regex.result4.bar2", "bar2");
    out.assertAttributeEquals("regex.result4.bar2", "bar2");
    out.assertAttributeEquals("regex.result4.bar3", "bar3");
    out.assertAttributeEquals("regex.result5.bar3", "bar3");
    out.assertAttributeEquals("regex.result6.all", SAMPLE_STRING);
    out.assertAttributeEquals("regex.result7.miss", null);
}
{code}
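The all-named restriction described above could be enforced roughly like this (a sketch only; `NamedGroupCheck`, `allGroupsNamed`, and the naive regex scan are hypothetical, not part of the actual PR, and the scan ignores escaped parentheses and character classes):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NamedGroupCheck {
    // Matches named-group openings like "(?<name>"; requiring a letter after
    // '<' keeps lookbehinds "(?<=" and "(?<!" from matching.
    private static final Pattern NAMED_GROUP_PATTERN =
            Pattern.compile("\\(\\?<([a-zA-Z][a-zA-Z0-9]*)>");

    // Count named-group openings in the regex source text.
    static int countNamedGroups(String regex) {
        Matcher m = NAMED_GROUP_PATTERN.matcher(regex);
        int count = 0;
        while (m.find()) {
            count++;
        }
        return count;
    }

    // All capturing groups must be named: compare the compiled capturing-group
    // count (which excludes non-capturing "(?:...)" groups) with the named count.
    static boolean allGroupsNamed(String regex) {
        int capturing = Pattern.compile(regex).matcher("").groupCount();
        return capturing > 0 && capturing == countNamedGroups(regex);
    }

    public static void main(String[] args) {
        System.out.println(allGroupsNamed("(?<a>x)(y)"));   // mixed -> false
        System.out.println(allGroupsNamed("(?:x)(?<a>y)")); // all named -> true
    }
}
```

A mixed pattern would then fall back to the old indexed-capture behavior.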


> Support named captures in ExtractText
> -
>
> Key: NIFI-2072
> URL: https://issues.apache.org/jira/browse/NIFI-2072
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joey Frazee
>Assignee: Otto Fowler
>Priority: Major
>
> ExtractText currently captures and creates attributes using numeric indices 
> (e.g., attribute.name.0, attribute.name.1, etc.) whether or not the capture 
> groups are named, i.e., patterns like (?<name>\w+).
> In addition to being more faithful to the provided regexes, named captures 
> could help simplify data flows because you wouldn't have to add superfluous 
> UpdateAttribute steps which are just renaming the indexed captures to more 
> interpretable names.

[jira] [Commented] (NIFI-2072) Support named captures in ExtractText

2020-06-30 Thread Otto Fowler (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148956#comment-17148956
 ] 

Otto Fowler commented on NIFI-2072:
---

[~pvillard]

Something like this?  The restriction on the property to enable is: if you 
want named groups, all your capturing groups MUST be named. You can't mix named 
and unnamed captures.


{code:java}
final String SAMPLE_STRING = "foo\r\nbar1\r\nbar2\r\nbar3\r\nhello\r\nworld\r\n";

@Test
public void testProcessorWithGroupNames() throws Exception {

    final TestRunner testRunner = TestRunners.newTestRunner(new ExtractText());

    testRunner.setProperty("regex.result1", "(?s)(?<all>.*)");
    testRunner.setProperty("regex.result2", "(?s).*(?<bar1>bar1).*");
    testRunner.setProperty("regex.result3", "(?s).*?(?<bar1>bar\\d).*");
    testRunner.setProperty("regex.result4", "(?s).*?(?:bar\\d).*?(?<bar2>bar\\d).*?(?<bar3>bar3).*");
    testRunner.setProperty("regex.result5", "(?s).*(?<bar3>bar\\d).*");
    testRunner.setProperty("regex.result6", "(?s)^(?<all>.*)$");
    testRunner.setProperty("regex.result7", "(?s)(?<miss>XXX)");
    testRunner.setProperty(ENABLE_NAMED_GROUPS, "true");
    testRunner.enqueue(SAMPLE_STRING.getBytes("UTF-8"));
    testRunner.run();

    testRunner.assertAllFlowFilesTransferred(ExtractText.REL_MATCH, 1);
    final MockFlowFile out = testRunner.getFlowFilesForRelationship(ExtractText.REL_MATCH).get(0);
    java.util.Map<String, String> attributes = out.getAttributes();
    out.assertAttributeEquals("regex.result1.all", SAMPLE_STRING);
    out.assertAttributeEquals("regex.result2.bar1", "bar1");
    out.assertAttributeEquals("regex.result3.bar1", "bar1");
    out.assertAttributeEquals("regex.result4.bar2", "bar2");
    out.assertAttributeEquals("regex.result4.bar2", "bar2");
    out.assertAttributeEquals("regex.result4.bar3", "bar3");
    out.assertAttributeEquals("regex.result5.bar3", "bar3");
    out.assertAttributeEquals("regex.result6.all", SAMPLE_STRING);
    out.assertAttributeEquals("regex.result7.miss", null);
}
{code}


> Support named captures in ExtractText
> -
>
> Key: NIFI-2072
> URL: https://issues.apache.org/jira/browse/NIFI-2072
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Joey Frazee
>Assignee: Otto Fowler
>Priority: Major
>
> ExtractText currently captures and creates attributes using numeric indices 
> (e.g., attribute.name.0, attribute.name.1, etc.) whether or not the capture 
> groups are named, i.e., patterns like (?<name>\w+).
> In addition to being more faithful to the provided regexes, named captures 
> could help simplify data flows because you wouldn't have to add superfluous 
> UpdateAttribute steps which are just renaming the indexed captures to more 
> interpretable names.





[jira] [Updated] (NIFI-7590) CassandraSessionProvider breaks after disable + re-enable

2020-06-30 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-7590:
--
Component/s: Extensions

> CassandraSessionProvider breaks after disable + re-enable
> -
>
> Key: NIFI-7590
> URL: https://issues.apache.org/jira/browse/NIFI-7590
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If Cassandra processors are using the CassandraSessionProvider service and the 
> service is disabled and then re-enabled (typically when one wants to edit 
> its properties), the service cannot connect to Cassandra any longer and the 
> processor keeps failing.
> Currently the only way to fix this is to restart NiFi.
> The root cause is a bug in the @OnDisabled and @OnEnabled:
> {code:java}
> @OnDisabled
> public void onDisabled(){
> if (cassandraSession != null) {
> cassandraSession.close();
> }
> if (cluster != null) {
> cluster.close();
> }
> }
> @OnEnabled
> public void onEnabled(final ConfigurationContext context) {
> connectToCassandra(context);
> }
> private void connectToCassandra(ConfigurationContext context) {
> if (cluster == null) {
> ...
> {code}
> In @OnDisabled, cluster is _closed_ but _not set to null_.
> In @OnEnabled, it is created _only if it is null_.
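A minimal sketch of the corrected lifecycle (stand-in types and class name, not the actual driver classes; the fix referenced on this ticket nulls the references in @OnDisabled so that @OnEnabled's "create only if null" check re-creates them):

```java
public class SessionProviderSketch {
    Object cluster;          // stands in for the Cassandra Cluster
    Object cassandraSession; // stands in for the Cassandra Session

    public void onDisabled() {
        if (cassandraSession != null) {
            // cassandraSession.close();
            cassandraSession = null; // the step missing in the original code
        }
        if (cluster != null) {
            // cluster.close();
            cluster = null;          // the step missing in the original code
        }
    }

    public void onEnabled() {
        if (cluster == null) {       // now true again after a disable
            cluster = new Object();  // connectToCassandra(context) in NiFi
            cassandraSession = new Object();
        }
    }
}
```

With the nulling in place, disable + re-enable rebuilds the connection instead of reusing a closed one.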





[jira] [Resolved] (NIFI-7590) CassandraSessionProvider breaks after disable + re-enable

2020-06-30 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi resolved NIFI-7590.
---
Fix Version/s: 1.12.0
   Resolution: Fixed

> CassandraSessionProvider breaks after disable + re-enable
> -
>
> Key: NIFI-7590
> URL: https://issues.apache.org/jira/browse/NIFI-7590
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If Cassandra processors are using the CassandraSessionProvider service and the 
> service is disabled and then re-enabled (typically when one wants to edit 
> its properties), the service cannot connect to Cassandra any longer and the 
> processor keeps failing.
> Currently the only way to fix this is to restart NiFi.
> The root cause is a bug in the @OnDisabled and @OnEnabled:
> {code:java}
> @OnDisabled
> public void onDisabled(){
> if (cassandraSession != null) {
> cassandraSession.close();
> }
> if (cluster != null) {
> cluster.close();
> }
> }
> @OnEnabled
> public void onEnabled(final ConfigurationContext context) {
> connectToCassandra(context);
> }
> private void connectToCassandra(ConfigurationContext context) {
> if (cluster == null) {
> ...
> {code}
> In @OnDisabled, cluster is _closed_ but _not set to null_.
> In @OnEnabled, it is created _only if it is null_.





[jira] [Commented] (NIFI-7590) CassandraSessionProvider breaks after disable + re-enable

2020-06-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148930#comment-17148930
 ] 

ASF subversion and git services commented on NIFI-7590:
---

Commit 197df577ac9ade19dd1c2c807231212757bbd3d7 in nifi's branch 
refs/heads/master from Tamas Palfy
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=197df57 ]

NIFI-7590 In 'CassandraSessionProvider.onDisabled' setting Cassandra-related 
references properly to null after closing them so that they can be renewed in 
'onEnabled' (which creates them only if set to 'null', leaving them closed 
otherwise).

NIFI-7590 Removed 'CassandraSessionProvider.onStopped'.

This closes #4373.

Signed-off-by: Peter Turcsanyi 


> CassandraSessionProvider breaks after disable + re-enable
> -
>
> Key: NIFI-7590
> URL: https://issues.apache.org/jira/browse/NIFI-7590
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Tamas Palfy
>Priority: Major
>
> If Cassandra processors are using the CassandraSessionProvider service and the 
> service is disabled and then re-enabled (typically when one wants to edit 
> its properties), the service cannot connect to Cassandra any longer and the 
> processor keeps failing.
> Currently the only way to fix this is to restart NiFi.
> The root cause is a bug in the @OnDisabled and @OnEnabled:
> {code:java}
> @OnDisabled
> public void onDisabled(){
> if (cassandraSession != null) {
> cassandraSession.close();
> }
> if (cluster != null) {
> cluster.close();
> }
> }
> @OnEnabled
> public void onEnabled(final ConfigurationContext context) {
> connectToCassandra(context);
> }
> private void connectToCassandra(ConfigurationContext context) {
> if (cluster == null) {
> ...
> {code}
> In @OnDisabled, cluster is _closed_ but _not set to null_.
> In @OnEnabled, it is created _only if it is null_.





[jira] [Commented] (NIFI-7590) CassandraSessionProvider breaks after disable + re-enable

2020-06-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148931#comment-17148931
 ] 

ASF subversion and git services commented on NIFI-7590:
---

Commit 197df577ac9ade19dd1c2c807231212757bbd3d7 in nifi's branch 
refs/heads/master from Tamas Palfy
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=197df57 ]

NIFI-7590 In 'CassandraSessionProvider.onDisabled' setting Cassandra-related 
references properly to null after closing them so that they can be renewed in 
'onEnabled' (which creates them only if set to 'null', leaving them closed 
otherwise).

NIFI-7590 Removed 'CassandraSessionProvider.onStopped'.

This closes #4373.

Signed-off-by: Peter Turcsanyi 


> CassandraSessionProvider breaks after disable + re-enable
> -
>
> Key: NIFI-7590
> URL: https://issues.apache.org/jira/browse/NIFI-7590
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Tamas Palfy
>Priority: Major
>
> If Cassandra processors are using the CassandraSessionProvider service and the 
> service is disabled and then re-enabled (typically when one wants to edit 
> its properties), the service cannot connect to Cassandra any longer and the 
> processor keeps failing.
> Currently the only way to fix this is to restart NiFi.
> The root cause is a bug in the @OnDisabled and @OnEnabled:
> {code:java}
> @OnDisabled
> public void onDisabled(){
> if (cassandraSession != null) {
> cassandraSession.close();
> }
> if (cluster != null) {
> cluster.close();
> }
> }
> @OnEnabled
> public void onEnabled(final ConfigurationContext context) {
> connectToCassandra(context);
> }
> private void connectToCassandra(ConfigurationContext context) {
> if (cluster == null) {
> ...
> {code}
> In @OnDisabled, cluster is _closed_ but _not set to null_.
> In @OnEnabled, it is created _only if it is null_.





[GitHub] [nifi] asfgit closed pull request #4373: NIFI-7590 Fix CassandraSessionProvider breaking after disable + re-enable

2020-06-30 Thread GitBox


asfgit closed pull request #4373:
URL: https://github.com/apache/nifi/pull/4373


   







[jira] [Assigned] (NIFI-7590) CassandraSessionProvider breaks after disable + re-enable

2020-06-30 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi reassigned NIFI-7590:
-

Assignee: Tamas Palfy

> CassandraSessionProvider breaks after disable + re-enable
> -
>
> Key: NIFI-7590
> URL: https://issues.apache.org/jira/browse/NIFI-7590
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If Cassandra processors are using the CassandraSessionProvider service and the 
> service is disabled and then re-enabled (typically when one wants to edit 
> its properties), the service cannot connect to Cassandra any longer and the 
> processor keeps failing.
> Currently the only way to fix this is to restart NiFi.
> The root cause is a bug in the @OnDisabled and @OnEnabled:
> {code:java}
> @OnDisabled
> public void onDisabled(){
> if (cassandraSession != null) {
> cassandraSession.close();
> }
> if (cluster != null) {
> cluster.close();
> }
> }
> @OnEnabled
> public void onEnabled(final ConfigurationContext context) {
> connectToCassandra(context);
> }
> private void connectToCassandra(ConfigurationContext context) {
> if (cluster == null) {
> ...
> {code}
> In @OnDisabled, cluster is _closed_ but _not set to null_.
> In @OnEnabled, it is created _only if it is null_.





[GitHub] [nifi] mattyb149 opened a new pull request #4376: NIFI-7592: Allow NiFi to be started without a GUI/REST interface

2020-06-30 Thread GitBox


mattyb149 opened a new pull request #4376:
URL: https://github.com/apache/nifi/pull/4376


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   Refactors the framework code to move Web/UI-related stuff into its own NAR, 
and to use ServiceLoader to find a NiFiServer implementation rather than 
hardcoding the JettyServer class as the required implementation (and thus a 
required NAR).
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Commented] (NIFI-7592) Allow NiFi to be started without a GUI/REST interface

2020-06-30 Thread Matt Burgess (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148927#comment-17148927
 ] 

Matt Burgess commented on NIFI-7592:


I propose that instead of requiring a JettyServer (the NiFi class that loads 
all the Web/UI WARs, etc.), we use its parent interface, NiFiServer. Rather 
than using reflection and requiring the nifi-jetty NAR, we would move the 
JettyServer-specific code out to a "nifi-server-nar". Then, as an alternative, 
we could add a "minifi-server-nar", for example, that also implements 
NiFiServer but performs startup code similar to what MiNiFi does today. The 
NiFi assembly would package only nifi-server-nar for now, but later, as 
MINIFI-422 progresses, we would have a MiNiFi assembly that instead packages 
the minifi-server-nar, which doesn't require all the Web/UI stuff.

This would allow us to replace the tight coupling to the web components with a 
ServiceLoader that searches the NARs for a NiFiServer implementation. Exactly 
one would be allowed; zero or more than one should result in failure to start.
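The exactly-one lookup could be sketched with the standard ServiceLoader API (the `NiFiServer` interface and `loadSingleServer` here are local stand-ins; the real lookup would use NiFi's NiFiServer interface and NAR-aware classloaders):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class ServerLoader {
    // Local stand-in for NiFi's NiFiServer interface; real implementations
    // would be registered via META-INF/services entries in the NARs.
    public interface NiFiServer { void start(); }

    // Exactly one implementation must be discoverable; zero or several
    // is a startup failure, as proposed above.
    static NiFiServer loadSingleServer(ClassLoader cl) {
        List<NiFiServer> found = new ArrayList<>();
        for (NiFiServer server : ServiceLoader.load(NiFiServer.class, cl)) {
            found.add(server);
        }
        if (found.size() != 1) {
            throw new IllegalStateException(
                    "Expected exactly one NiFiServer implementation, found " + found.size());
        }
        return found.get(0);
    }

    public static void main(String[] args) {
        try {
            loadSingleServer(ServerLoader.class.getClassLoader());
        } catch (IllegalStateException e) {
            // No provider is registered on a plain classpath, so this fails
            System.out.println(e.getMessage());
        }
    }
}
```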

> Allow NiFi to be started without a GUI/REST interface
> -
>
> Key: NIFI-7592
> URL: https://issues.apache.org/jira/browse/NIFI-7592
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> In conjunction with MINIFI-422 (bringing MiNiFi into the NiFi codebase), it 
> would be necessary to allow a NiFi build to run without having the GUI and 
> REST API components required. For normal NiFi releases, the GUI would be 
> included in the assembly, but this Jira proposes to reorganize and refactor 
> the framework code to allow NiFi to run flows and such without requiring the 
> web bundle.





[jira] [Assigned] (NIFI-7592) Allow NiFi to be started without a GUI/REST interface

2020-06-30 Thread Matt Burgess (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess reassigned NIFI-7592:
--

Assignee: Matt Burgess

> Allow NiFi to be started without a GUI/REST interface
> -
>
> Key: NIFI-7592
> URL: https://issues.apache.org/jira/browse/NIFI-7592
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Reporter: Matt Burgess
>Assignee: Matt Burgess
>Priority: Major
>
> In conjunction with MINIFI-422 (bringing MiNiFi into the NiFi codebase), it 
> would be necessary to allow a NiFi build to run without having the GUI and 
> REST API components required. For normal NiFi releases, the GUI would be 
> included in the assembly, but this Jira proposes to reorganize and refactor 
> the framework code to allow NiFi to run flows and such without requiring the 
> web bundle.





[jira] [Created] (NIFI-7592) Allow NiFi to be started without a GUI/REST interface

2020-06-30 Thread Matt Burgess (Jira)
Matt Burgess created NIFI-7592:
--

 Summary: Allow NiFi to be started without a GUI/REST interface
 Key: NIFI-7592
 URL: https://issues.apache.org/jira/browse/NIFI-7592
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Reporter: Matt Burgess


In conjunction with MINIFI-422 (bringing MiNiFi into the NiFi codebase), it 
would be necessary to allow a NiFi build to run without having the GUI and REST 
API components required. For normal NiFi releases, the GUI would be included in 
the assembly, but this Jira proposes to reorganize and refactor the framework 
code to allow NiFi to run flows and such without requiring the web bundle.





[jira] [Updated] (NIFI-7493) XML Schema Inference can infer a type of String when it should be Record

2020-06-30 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-7493:
-
Fix Version/s: 1.12.0

> XML Schema Inference can infer a type of String when it should be Record
> 
>
> Key: NIFI-7493
> URL: https://issues.apache.org/jira/browse/NIFI-7493
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> From the mailing list:
> {quote}I have configured a XMLReader to use the Infer Schema. The other issue 
> is that I have problems converting sub records. My records look something 
> like this (the XML markup was stripped from the archived message; the record 
> contains a name "John Doe" and other simple fields, plus a Part3/Details 
> section whose additionalInfo entries carry "New York" and "A Company"):
>  
> The issues are with the subrecords in part 3. I have configured the XMLReader 
> property "Field Name for Content" = value
>  
> When the data is being converted via an XMLWriter, the output for the 
> additionalInfo fields looks like this:
> MapRecord[{name=Location, value=New York}]
> MapRecord[{name=Company, value=A Company}]
>  
>  
> If I use a JSONWriter I gets this:
> "Part3": {    "Details": {
>     "additionalInfo": [ "MapRecord[\{name=Location, value=New York}]", 
> "MapRecord[\{name=Company, value=A Company}]" ]
>     }
> }{quote}
> The issue appears to be that "additionalInfo" is being inferred as a String, 
> but the XML Reader is returning a Record.
>   
>  This is probably because the "additionalInfo" element contains String 
> content and no child nodes. However, it does have attributes. As a result, 
> the XML Reader will return a Record. I'm guessing that attributes are not 
> taken into account in the schema inference, though, and since 
> "additionalInfo" has no child nodes but has textual content, it must be a 
> String.
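The mismatch can be sketched with plain JDK XML parsing. The sample element below is a hypothetical reconstruction of the user's data, and the two boolean checks only illustrate the inference rule described above, not NiFi's actual implementation:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class InferenceMismatchDemo {
    // Returns true when the element would be read as a Record (it carries
    // attributes) even though attribute-blind inference, seeing no child
    // elements and only text content, would call it a String.
    static boolean readerAndInferenceDisagree(String xml) {
        try {
            Element e = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)))
                    .getDocumentElement();
            // No descendant elements: inference sees only textual content.
            boolean noChildElements = e.getElementsByTagName("*").getLength() == 0;
            // But attributes exist, so the reader produces a Record such as
            // {name=Location, value=New York}.
            boolean hasAttributes = e.getAttributes().getLength() > 0;
            return noChildElements && hasAttributes;
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }

    public static void main(String[] args) {
        // Hypothetical sample reconstructed from the MapRecord output above.
        System.out.println(readerAndInferenceDisagree(
                "<additionalInfo name=\"Location\">New York</additionalInfo>"));
    }
}
```

For the attributed element this prints true: the reader yields a Record while attribute-blind inference would have inferred String, which matches the bug described above.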





[jira] [Updated] (NIFI-7493) XML Schema Inference can infer a type of String when it should be Record

2020-06-30 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-7493:
-
Status: Patch Available  (was: Open)

> XML Schema Inference can infer a type of String when it should be Record
> 
>
> Key: NIFI-7493
> URL: https://issues.apache.org/jira/browse/NIFI-7493
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Mark Payne
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> From the mailing list:
> {quote}I have configured an XMLReader to use the Infer Schema. The other issue 
> is that I have problems converting sub records. My record looks something 
> like this (the XML sample was stripped of its tags by the mail archive; it 
> contained a Part3/Details section whose "additionalInfo" elements carry a 
> name attribute, e.g. Location and Company, with text values such as "New 
> York" and "A Company").
>  
> The issues are with the subrecords in part 3. I have configured the XMLReader 
> property "Field Name for Content" = value
>  
> When the data is converted via an XMLWriter, the output for the 
> additionalInfo fields looks like this (surrounding tags again stripped):
> MapRecord[\{name=Location, value=New York}]
> MapRecord[\{name=Company, value=A Company}]
>  
> If I use a JSONWriter I get this:
> "Part3": {
>   "Details": {
>     "additionalInfo": [ "MapRecord[\{name=Location, value=New York}]",
>       "MapRecord[\{name=Company, value=A Company}]" ]
>   }
> }{quote}
> The issue appears to be that "additionalInfo" is being inferred as a String, 
> but the XML Reader is returning a Record.
>   
>  This is probably because the "additionalInfo" element contains String 
> content and no child nodes. However, it does have attributes. As a result, 
> the XML Reader will return a Record. I'm guessing that attributes are not 
> taken into account in the schema inference, though, and since 
> "additionalInfo" has no child nodes but has textual content, it must be a 
> String.





[GitHub] [nifi] markap14 opened a new pull request #4375: NIFI-7493: When inferring schema for XML data, if we find a text elem…

2020-06-30 Thread GitBox


markap14 opened a new pull request #4375:
URL: https://github.com/apache/nifi/pull/4375


   …ent that also has attributes, infer it as a Record type, in order to match 
how the data will be read when using the XML Reader
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _Enables X functionality; fixes bug NIFI-._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #827: MINIFICPP-1273 - Drain connections on flow shutdown

2020-06-30 Thread GitBox


szaszm commented on a change in pull request #827:
URL: https://github.com/apache/nifi-minifi-cpp/pull/827#discussion_r447924329



##
File path: libminifi/src/core/ProcessGroup.cpp
##
@@ -92,13 +92,12 @@ ProcessGroup::~ProcessGroup() {
 onScheduleTimer_->stop();
   }
 
-  for (auto &&connection : connections_) {
+  for (auto&& connection : connections_) {

Review comment:
   That's a forwarding reference, because `auto` uses template deduction 
rules. In other words, "I don't care what it is, bind a reference to it" 
reference.
   In this case it will be a const lvalue reference, because:
   1. `auto&&`
   2. `const std::shared_ptr<Connection>& &&` after deducing `auto` (note: 
`connections_` is a `std::set<std::shared_ptr<Connection>>`, which only has 
const iterators)
   3. `const std::shared_ptr&` after reference-collapsing
   
   Normally I point out that `const auto&` or `auto&` would be more explicit 
and readable, but since this is old code with just a space change, I didn't 
want to bother @adamdebreceni with this.









[jira] [Commented] (NIFI-7579) Create a GetS3Object Processor

2020-06-30 Thread ArpStorm1 (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148915#comment-17148915
 ] 

ArpStorm1 commented on NIFI-7579:
-

The problem with the List/Fetch pattern regarding S3 is the need to first list 
all the objects, and the list operation can be very heavy.
S3 is a common object-storage standard today, and not only Amazon implements 
it.
Using the ListS3 processor can create a heavy workload on the backend storage, 
resulting in slow responses that can fail the entire flow.
Sometimes that can be avoided by fetching exactly the object the user needs.
GetS3Object does not have to be the solution - maybe implementing this logic 
in the FetchS3Object processor would be enough.

> Create a GetS3Object Processor
> --
>
> Key: NIFI-7579
> URL: https://issues.apache.org/jira/browse/NIFI-7579
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: ArpStorm1
>Assignee: YoungGyu Chun
>Priority: Major
>
> Sometimes the client needs to get only a specific object or a subset of 
> objects from its bucket. Currently, the only way to do this is to use the 
> ListS3 processor followed by the FetchS3Object processor. Creating a 
> GetS3Object processor for such cases would be great.





[jira] [Updated] (NIFI-7572) Add a ScriptedTransformRecord processor

2020-06-30 Thread Mark Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Payne updated NIFI-7572:
-
Status: Patch Available  (was: Open)

> Add a ScriptedTransformRecord processor
> ---
>
> Key: NIFI-7572
> URL: https://issues.apache.org/jira/browse/NIFI-7572
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Extensions
>Reporter: Mark Payne
>Assignee: Mark Payne
>Priority: Major
>
> NiFi has started to put a heavier emphasis on Record-oriented processors, as 
> they provide many benefits including better performance and a better UX over 
> their purely byte-oriented counterparts. It is common to see users wanting to 
> transform a Record in some very specific way, but NiFi doesn't make this as 
> easy as it should. There are methods using ExecuteScript, 
> InvokeScriptedProcessor, ScriptedRecordWriter, and ScriptedRecordReader, for 
> instance.
> But each of these requires that the script writer understand a lot about NiFi 
> and how to expose properties, create Property Descriptors, etc., and for 
> fairly simple transformations we end up with scripts where the logic takes 
> fewer lines of code than the boilerplate.
> We should expose a Processor that allows a user to write a script that takes 
> a Record and transforms that Record in some way. The processor should be 
> configured with the following:
>  * Record Reader (required)
>  * Record Writer (required)
>  * Script Language (required)
>  * Script Body or Script File (one and only one of these required)
> The script should implement a single method along the lines of:
> {code:java}
> Record transform(Record input) throws Exception; {code}
> If the script returns null, the input Record should be dropped. Otherwise, 
> whatever Record is returned should be written to the Record Writer.
> The processor should have two relationships: "success" and "failure."
> The script should not be allowed to expose any properties or define any 
> relationships. The point is to keep the script focused purely on processing 
> the record itself.
> It's not entirely clear to me how easily the Record API works with some of the 
> scripting languages. The Record object does expose a method named toMap() 
> that returns a Map containing the underlying key/value pairs. 
> However, the values in that Map may themselves be Records. It might make 
> sense to expose a new method toNormalizedMap() or something along those lines 
> that would return a Map where the values have been 
> recursively normalized, in much the same way that we do for 
> JoltTransformRecord. This would perhaps allow for cleaner syntax, but I'm not 
> a scripting expert so I can't say for sure whether such a method is necessary.
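The proposed contract can be sketched in plain Java. The `Record` class below is a hypothetical stand-in for NiFi's actual `org.apache.nifi.serialization.record.Record`, and the uppercasing logic is just an arbitrary example transform:

```java
import java.util.HashMap;
import java.util.Map;

public class TransformSketch {
    // Minimal stand-in for NiFi's Record API: just a bag of named fields.
    // The real processor would hand the script a NiFi Record instead.
    static class Record {
        final Map<String, Object> fields = new HashMap<>();
        Object get(String name) { return fields.get(name); }
        void set(String name, Object value) { fields.put(name, value); }
    }

    // The single method the proposed script would implement: return the
    // (possibly modified) record to keep it, or null to drop it.
    static Record transform(Record input) {
        Object name = input.get("name");
        if (name == null) {
            return null;                      // drop records without a name
        }
        input.set("name", name.toString().toUpperCase());
        return input;                         // keep the transformed record
    }

    public static void main(String[] args) {
        Record r = new Record();
        r.set("name", "john doe");
        System.out.println(transform(r).get("name"));  // prints "JOHN DOE"
    }
}
```

The null-means-drop convention keeps simple filter-and-rewrite scripts down to the few lines of actual logic, which is exactly the boilerplate reduction the ticket argues for.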





[jira] [Updated] (NIFI-7587) PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers is unstable

2020-06-30 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-7587:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers 
> is unstable
> -
>
> Key: NIFI-7587
> URL: https://issues.apache.org/jira/browse/NIFI-7587
> Project: Apache NiFi
>  Issue Type: Test
>Reporter: Joe Witt
>Assignee: Andy LoPresto
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Unstable and/or broken test or code
> [ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 22.765 s <<< FAILURE! - in org.apache.nifi.remote.client.PeerSelectorTest
> [ERROR] 
> testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers(org.apache.nifi.remote.client.PeerSelectorTest)
>   Time elapsed: 1.095 s  <<< FAILURE!
> org.codehaus.groovy.runtime.powerassert.PowerAssertionError: 
> assert count >= lowerBound && count <= upperBound
>| |  |  |  | |  |
>5103  |  4900.0 |  5103  |  5100.0
>  true  falsefalse
>   at 
> org.apache.nifi.remote.client.PeerSelectorTest.assertDistributionPercentages(PeerSelectorTest.groovy:147)
>   at 
> org.apache.nifi.remote.client.PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers(PeerSelectorTest.groovy:759)
> [ERROR] Failures: 
> [ERROR]   
> PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers:759->assertDistributionPercentages:147
>  assert count >= lowerBound && count <= upperBound
>| |  |  |  | |  |
>5103  |  4900.0 |  5103  |  5100.0
>  true  falsefalse
> [ERROR] Tests run: 92, Failures: 1, Errors: 0, Skipped: 2
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (default-test) on 
> project nifi-site-to-site-client: There are test failures.





[jira] [Updated] (NIFI-7587) PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers is unstable

2020-06-30 Thread Andy LoPresto (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-7587:

Status: Patch Available  (was: Open)

> PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers 
> is unstable
> -
>
> Key: NIFI-7587
> URL: https://issues.apache.org/jira/browse/NIFI-7587
> Project: Apache NiFi
>  Issue Type: Test
>Reporter: Joe Witt
>Assignee: Andy LoPresto
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Unstable and/or broken test or code
> [ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 22.765 s <<< FAILURE! - in org.apache.nifi.remote.client.PeerSelectorTest
> [ERROR] 
> testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers(org.apache.nifi.remote.client.PeerSelectorTest)
>   Time elapsed: 1.095 s  <<< FAILURE!
> org.codehaus.groovy.runtime.powerassert.PowerAssertionError: 
> assert count >= lowerBound && count <= upperBound
>| |  |  |  | |  |
>5103  |  4900.0 |  5103  |  5100.0
>  true  falsefalse
>   at 
> org.apache.nifi.remote.client.PeerSelectorTest.assertDistributionPercentages(PeerSelectorTest.groovy:147)
>   at 
> org.apache.nifi.remote.client.PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers(PeerSelectorTest.groovy:759)
> [ERROR] Failures: 
> [ERROR]   
> PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers:759->assertDistributionPercentages:147
>  assert count >= lowerBound && count <= upperBound
>| |  |  |  | |  |
>5103  |  4900.0 |  5103  |  5100.0
>  true  falsefalse
> [ERROR] Tests run: 92, Failures: 1, Errors: 0, Skipped: 2
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (default-test) on 
> project nifi-site-to-site-client: There are test failures.





[jira] [Assigned] (NIFI-7591) Allow PutS3Object to post to AWS Snowball

2020-06-30 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi reassigned NIFI-7591:
-

Assignee: Peter Turcsanyi

> Allow PutS3Object to post to AWS Snowball
> -
>
> Key: NIFI-7591
> URL: https://issues.apache.org/jira/browse/NIFI-7591
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Juan C. Sequeiros
>Assignee: Peter Turcsanyi
>Priority: Major
>
> When posting using PutS3Object to AWS Snowball [1], it fails with the error 
> shown below. In short, Snowball supports the AWS S3 API only in a limited 
> fashion [2].
> [1][https://aws.amazon.com/getting-started/projects/migrate-petabyte-scale-data/faq/]
> [2][https://docs.aws.amazon.com/snowball/latest/developer-guide/using-adapter.html]
> {code:java}
> Chunk encoding is not supported yet (Service: Amazon S3; Status Code: 501; 
> Error Code: NotImplemented; Request ID: null; S3 Extended Request ID: null)   
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1695)
>          at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1350)
>          at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1101)
>          at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:758)
>          at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:732)
>          at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:714)
>          at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:674)
>          at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:656)
>          at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:520)        
>  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4705) 
>         at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4652)     
>     at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1807)  
>        at 
> org.apache.nifi.processors.aws.s3.PutS3Object$1.process(PutS3Object.java:504) 
>         at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2212)
>          at 
> org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2180)
>          at 
> org.apache.nifi.processors.aws.s3.PutS3Object.onTrigger(PutS3Object.java:443) 
>         at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>          at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1162)
>          at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:209)
>          at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
>          at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)      
>    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)   
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>          at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>          at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>          at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>          at java.lang.Thread.run(Thread.java:748) {code}
> AWS docs state this: [2]
> "If your solution uses the AWS SDK for Java version 1.11.0 or newer, you must 
> use the following S3ClientOptions" 
> {code:java}
> disableChunkedEncoding() – Indicates that chunked encoding is not supported 
> with the adapter.
> setPathStyleAccess(true) – Configures the adapter to use path-style access 
> for all requests.
> [1]https://aws.amazon.com/getting-started/projects/migrate-petabyte-scale-data/faq/
> [2]https://docs.aws.amazon.com/snowball/latest/developer-guide/using-adapter.html{code}





[jira] [Created] (MINIFICPP-1278) Add Python processor tests to CI

2020-06-30 Thread Arpad Boda (Jira)
Arpad Boda created MINIFICPP-1278:
-

 Summary: Add Python processor tests to CI
 Key: MINIFICPP-1278
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1278
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Arpad Boda
Assignee: Arpad Boda
 Fix For: 0.8.0


As Python processor tests have been introduced lately (thanks to [~hunyadi]), 
they should be part of at least one CI job. 





[jira] [Created] (NIFI-7591) Allow PutS3Object to post to AWS Snowball

2020-06-30 Thread Juan C. Sequeiros (Jira)
Juan C. Sequeiros created NIFI-7591:
---

 Summary: Allow PutS3Object to post to AWS Snowball
 Key: NIFI-7591
 URL: https://issues.apache.org/jira/browse/NIFI-7591
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Reporter: Juan C. Sequeiros


When posting using PutS3Object to AWS Snowball [1], it fails with the error 
shown below. In short, Snowball supports the AWS S3 API only in a limited 
fashion [2].

[1][https://aws.amazon.com/getting-started/projects/migrate-petabyte-scale-data/faq/]

[2][https://docs.aws.amazon.com/snowball/latest/developer-guide/using-adapter.html]
{code:java}
Chunk encoding is not supported yet (Service: Amazon S3; Status Code: 501; 
Error Code: NotImplemented; Request ID: null; S3 Extended Request ID: null)     
    at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1695)
         at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1350)
         at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1101)
         at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:758)
         at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:732)
         at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:714)
         at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:674)
         at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:656)
         at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:520)         
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4705)    
     at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4652)       
  at 
com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1807)    
     at 
org.apache.nifi.processors.aws.s3.PutS3Object$1.process(PutS3Object.java:504)   
      at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2212)
         at 
org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2180)
         at 
org.apache.nifi.processors.aws.s3.PutS3Object.onTrigger(PutS3Object.java:443)   
      at 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
         at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1162)
         at 
org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:209)
         at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
         at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)        
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)     
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)         
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
         at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
        at java.lang.Thread.run(Thread.java:748) {code}
AWS docs state this: [2]

"If your solution uses the AWS SDK for Java version 1.11.0 or newer, you must 
use the following S3ClientOptions" 
{code:java}
disableChunkedEncoding() – Indicates that chunked encoding is not supported 
with the adapter.
setPathStyleAccess(true) – Configures the adapter to use path-style access for 
all requests.


[1]https://aws.amazon.com/getting-started/projects/migrate-petabyte-scale-data/faq/
[2]https://docs.aws.amazon.com/snowball/latest/developer-guide/using-adapter.html{code}
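A hedged sketch of what applying those two options could look like with the AWS SDK for Java v1 client builder. The endpoint and region below are placeholder values, and this assumes the `aws-java-sdk-s3` artifact is on the classpath; it is a configuration sketch, not a tested implementation:

```java
// Sketch only: assumes AWS SDK for Java v1 (com.amazonaws:aws-java-sdk-s3).
// The endpoint URL and region are placeholders for a real Snowball adapter.
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class SnowballClientSketch {
    static AmazonS3 buildClient() {
        return AmazonS3ClientBuilder.standard()
                // Point at the Snowball S3 adapter, not the public S3 endpoint.
                .withEndpointConfiguration(
                        new EndpointConfiguration("http://snowball-adapter:8080", "us-east-1"))
                // The two options the Snowball docs require:
                .withChunkedEncodingDisabled(true)   // disableChunkedEncoding()
                .withPathStyleAccessEnabled(true)    // setPathStyleAccess(true)
                .build();
    }
}
```

For PutS3Object this suggests exposing the two options as processor properties so the underlying client can be built this way when targeting a Snowball endpoint.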





[GitHub] [nifi] alopresto opened a new pull request #4372: NIFI-7587 Increased tolerance for non-deterministic unit test.

2020-06-30 Thread GitBox


alopresto opened a new pull request #4372:
URL: https://github.com/apache/nifi/pull/4372


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   _The S2S peer selection process is non-deterministic. I increased the 
tolerance for one of the tests, as it failed during a recent GitHub CI/CD 
build._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [x] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [x] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [x] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [x] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[jira] [Assigned] (NIFI-7587) PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers is unstable

2020-06-30 Thread Andy LoPresto (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto reassigned NIFI-7587:
---

Assignee: Andy LoPresto

> PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers 
> is unstable
> -
>
> Key: NIFI-7587
> URL: https://issues.apache.org/jira/browse/NIFI-7587
> Project: Apache NiFi
>  Issue Type: Test
>Reporter: Joe Witt
>Assignee: Andy LoPresto
>Priority: Major
> Fix For: 1.12.0
>
>
> Unstable and/or broken test or code
> [ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 22.765 s <<< FAILURE! - in org.apache.nifi.remote.client.PeerSelectorTest
> [ERROR] 
> testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers(org.apache.nifi.remote.client.PeerSelectorTest)
>   Time elapsed: 1.095 s  <<< FAILURE!
> org.codehaus.groovy.runtime.powerassert.PowerAssertionError: 
> assert count >= lowerBound && count <= upperBound
>| |  |  |  | |  |
>5103  |  4900.0 |  5103  |  5100.0
>  true  falsefalse
>   at 
> org.apache.nifi.remote.client.PeerSelectorTest.assertDistributionPercentages(PeerSelectorTest.groovy:147)
>   at 
> org.apache.nifi.remote.client.PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers(PeerSelectorTest.groovy:759)
> [ERROR] Failures: 
> [ERROR]   
> PeerSelectorTest.testGetAvailablePeerStatusShouldHandleMultiplePenalizedPeers:759->assertDistributionPercentages:147
>  assert count >= lowerBound && count <= upperBound
>| |  |  |  | |  |
>5103  |  4900.0 |  5103  |  5100.0
>  true  falsefalse
> [ERROR] Tests run: 92, Failures: 1, Errors: 0, Skipped: 2
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (default-test) on 
> project nifi-site-to-site-client: There are test failures.





[GitHub] [nifi] adamfisher commented on pull request #3317: NIFI-6047 Add DetectDuplicateRecord Processor

2020-06-30 Thread GitBox


adamfisher commented on pull request #3317:
URL: https://github.com/apache/nifi/pull/3317#issuecomment-651851517


   @MikeThomsen I tried following your steps for rebasing and I thought it all 
went ok but I seem to have a lot more commits now. Would you be able to advise? 
I'm not a git expert.







[GitHub] [nifi] simonbence commented on pull request #4349: NIFI-7549 Adding Hazelcast based DistributedMapCacheClient support

2020-06-30 Thread GitBox


simonbence commented on pull request #4349:
URL: https://github.com/apache/nifi/pull/4349#issuecomment-651846584


   Please hold on, I would like to add some changes







[GitHub] [nifi] Wastack opened a new pull request #4371: NIFI-7589 Fix path value when unpacking tar

2020-06-30 Thread GitBox


Wastack opened a new pull request #4371:
URL: https://github.com/apache/nifi/pull/4371


   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   Fix for NIFI-7589
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [X] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #821: MINIFICPP-1251 - Implement and test RetryFlowFile processor

2020-06-30 Thread GitBox


hunyadi-dev commented on a change in pull request #821:
URL: https://github.com/apache/nifi-minifi-cpp/pull/821#discussion_r447732749



##
File path: extensions/standard-processors/processors/RetryFlowFile.cpp
##
@@ -0,0 +1,212 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "RetryFlowFile.h"
+
+#include "core/PropertyValidation.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+core::Property 
RetryFlowFile::RetryAttribute(core::PropertyBuilder::createProperty("Retry 
Attribute")
+->withDescription(
+"The name of the attribute that contains the current retry count for 
the FlowFile."
+"WARNING: If the name matches an attribute already on the FlowFile 
that does not contain a numerical value, "
+"the processor will either overwrite that attribute with '1' or fail 
based on configuration.")
+->withDefaultValue("flowfile.retries")
+->supportsExpressionLanguage(true)
+->build());
+
+core::Property 
RetryFlowFile::MaximumRetries(core::PropertyBuilder::createProperty("Maximum 
Retries")
+->withDescription("The maximum number of times a FlowFile can be retried 
before being passed to the 'retries_exceeded' relationship.")
+->withDefaultValue(3)
+->supportsExpressionLanguage(true)
+->build());
+
+core::Property 
RetryFlowFile::PenalizeRetries(core::PropertyBuilder::createProperty("Penalize 
Retries")
+  ->withDescription("If set to 'true', this Processor will penalize input 
FlowFiles before passing them to the 'retry' relationship. This does not apply 
to the 'retries_exceeded' relationship.")
+  ->withDefaultValue(true)
+  ->build());
+
+core::Property 
RetryFlowFile::FailOnNonNumericalOverwrite(core::PropertyBuilder::createProperty("Fail
 on Non-numerical Overwrite")
+->withDescription("If the FlowFile already has the attribute defined in 
'Retry Attribute' that is *not* a number, fail the FlowFile instead of 
resetting that value to '1'")
+->withDefaultValue(false)
+->build());
+
+core::Property 
RetryFlowFile::ReuseMode(core::PropertyBuilder::createProperty("Reuse Mode")
+->withDescription(
+"Defines how the Processor behaves if the retry FlowFile has a 
different retry UUID than "
+"the instance that received the FlowFile. This generally means that 
the attribute was "
+"not reset after being successfully retried by a previous instance of 
this processor.")
+->withAllowableValues({FAIL_ON_REUSE, WARN_ON_REUSE, 
RESET_REUSE})
+->withDefaultValue(FAIL_ON_REUSE)
+->build());
+
+core::Relationship RetryFlowFile::Retry("retry",
+  "Input FlowFile has not exceeded the configured maximum retry count, pass 
this relationship back to the input Processor to create a limited feedback 
loop.");
+core::Relationship RetryFlowFile::RetriesExceeded("retries_exceeded",
+  "Input FlowFile has exceeded the configured maximum retry count, do not pass 
this relationship back to the input Processor to terminate the limited feedback 
loop.");
+core::Relationship RetryFlowFile::Failure("failure",
+"The processor is configured such that a non-numerical value on 'Retry 
Attribute' results in a failure instead of resetting "
+"that value to '1'. This will immediately terminate the limited feedback 
loop. Might also include when 'Maximum Retries' contains "
+" attribute expression language that does not resolve to an Integer.");
+
+void RetryFlowFile::initialize() {
+  setSupportedProperties({
+RetryAttribute,
+MaximumRetries,
+PenalizeRetries,
+FailOnNonNumericalOverwrite,
+ReuseMode,
+  });
+  setSupportedRelationships({
+Retry,
+RetriesExceeded,
+Failure,
+  });
+}
+
+void RetryFlowFile::onSchedule(core::ProcessContext* context, 
core::ProcessSessionFactory* /* sessionFactory */) {
+  context->getProperty(RetryAttribute.getName(), retry_attribute_);
+  context->getProperty(MaximumRetries.getName(), maximum_retries_);
+  context->getProperty(PenalizeRetries.getName(), penalize_retries_);
+  context->getProperty(FailOnNonNumericalOverwrite.getName(), 
fail_on_non_numerical_overwrite_);
+  context->getProperty(ReuseMode.getName(), reuse_

[GitHub] [nifi] Wastack opened a new pull request #4370: NIFI-6128 UnpackContent: Store unpacked file data

2020-06-30 Thread GitBox


Wastack opened a new pull request #4370:
URL: https://github.com/apache/nifi/pull/4370


   The tar format allows us to archive files with their original permissions,
   owner, group name and last modification time.
   
   When unpacking with the tar unpacker, this information is stored in new
   attributes named "file.inner.*". This preserves backward compatibility
   when used in parallel with the GetFile processor (which stores the
   information in "file.*").
   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    #### Description of PR
   
   _Enables X functionality; fixes bug NIFI-XXXX._
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [X] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-30 Thread GitBox


arpadboda closed pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797


   







[jira] [Updated] (NIFI-7578) nifi-toolkit CLI Process Group Create command

2020-06-30 Thread Bryan Bende (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Bende updated NIFI-7578:
--
Fix Version/s: (was: 1.11.4)
   1.12.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> nifi-toolkit CLI Process Group Create command
> -
>
> Key: NIFI-7578
> URL: https://issues.apache.org/jira/browse/NIFI-7578
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Affects Versions: 1.11.4
>Reporter: Javi Roman
>Assignee: Javi Roman
>Priority: Major
>  Labels: nifi-toolkit
> Fix For: 1.12.0
>
> Attachments: nifi-cli-pg-create.png, ui-pg-teams.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The most common approach to user/team managed authorization is through the 
> use of unique Process Groups added to the Root Process Group. All this 
> approach can be done by means of NiFi CLI commands, for instance:
>  # Create Process Group (NiFi UI): Team1
>  # bin/cli.sh nifi create-user username-for-team1
>  # bin/cli.sh nifi create-user-group -ugn Team1
>  # bin/cli.sh nifi update-user-group -ugn Team1 -uil 
> dcea37eb-0172-1000-d387-83441fa6fafc
>  # bin/cli.sh nifi update-policy -gnl Team1 -poa read -por /flow  and so 
> on for policies.
> The only UI made step in this user/team approach is the creation of the 
> Process Group from the root PG.
> The idea is create a new command in the CLI:
> bin/cli.sh nifi pg-create -pgn Team1
>  





[jira] [Commented] (NIFI-7578) nifi-toolkit CLI Process Group Create command

2020-06-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148689#comment-17148689
 ] 

ASF subversion and git services commented on NIFI-7578:
---

Commit c221e4934d0d4be3215e9766c93c85472f91aef2 in nifi's branch 
refs/heads/master from Javi Roman
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=c221e49 ]

NIFI-7578 nifi-toolkit CLI Process Group Create command
- Remove unused imports
- Fix checkstyle errors

This closes #4358.


> nifi-toolkit CLI Process Group Create command
> -
>
> Key: NIFI-7578
> URL: https://issues.apache.org/jira/browse/NIFI-7578
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Tools and Build
>Affects Versions: 1.11.4
>Reporter: Javi Roman
>Assignee: Javi Roman
>Priority: Major
>  Labels: nifi-toolkit
> Fix For: 1.11.4
>
> Attachments: nifi-cli-pg-create.png, ui-pg-teams.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The most common approach to user/team managed authorization is through the 
> use of unique Process Groups added to the Root Process Group. All this 
> approach can be done by means of NiFi CLI commands, for instance:
>  # Create Process Group (NiFi UI): Team1
>  # bin/cli.sh nifi create-user username-for-team1
>  # bin/cli.sh nifi create-user-group -ugn Team1
>  # bin/cli.sh nifi update-user-group -ugn Team1 -uil 
> dcea37eb-0172-1000-d387-83441fa6fafc
>  # bin/cli.sh nifi update-policy -gnl Team1 -poa read -por /flow  and so 
> on for policies.
> The only UI made step in this user/team approach is the creation of the 
> Process Group from the root PG.
> The idea is create a new command in the CLI:
> bin/cli.sh nifi pg-create -pgn Team1
>  





[GitHub] [nifi] asfgit closed pull request #4358: NIFI-7578 nifi-toolkit CLI Process Group Create command

2020-06-30 Thread GitBox


asfgit closed pull request #4358:
URL: https://github.com/apache/nifi/pull/4358


   







[GitHub] [nifi] bbende commented on pull request #4358: NIFI-7578 nifi-toolkit CLI Process Group Create command

2020-06-30 Thread GitBox


bbende commented on pull request #4358:
URL: https://github.com/apache/nifi/pull/4358#issuecomment-651803038


   Looks good, merged to master, thanks!







[jira] [Updated] (NIFI-7586) Add socket-level timeout properties for CassandraSessionProvider

2020-06-30 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-7586:
--
Component/s: Extensions

> Add socket-level timeout properties for CassandraSessionProvider
> 
>
> Key: NIFI-7586
> URL: https://issues.apache.org/jira/browse/NIFI-7586
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The DataStax library used by NiFi to connect to Cassandra would allow the 
> setting of socket level read timeout and connect timeout but NiFi doesn't 
> expose them as properties or any other way.
> The default values are a couple of seconds which is probably enough most of 
> the time but not always.
> We should allow the users to provide their own configuration.





[jira] [Resolved] (NIFI-7586) Add socket-level timeout properties for CassandraSessionProvider

2020-06-30 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi resolved NIFI-7586.
---
Fix Version/s: 1.12.0
   Resolution: Fixed

> Add socket-level timeout properties for CassandraSessionProvider
> 
>
> Key: NIFI-7586
> URL: https://issues.apache.org/jira/browse/NIFI-7586
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The DataStax library used by NiFi to connect to Cassandra would allow the 
> setting of socket level read timeout and connect timeout but NiFi doesn't 
> expose them as properties or any other way.
> The default values are a couple of seconds which is probably enough most of 
> the time but not always.
> We should allow the users to provide their own configuration.





[jira] [Commented] (NIFI-7586) Add socket-level timeout properties for CassandraSessionProvider

2020-06-30 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148669#comment-17148669
 ] 

ASF subversion and git services commented on NIFI-7586:
---

Commit c2f46c44ca29a07a0f418f6d46845f7ae7bccf91 in nifi's branch 
refs/heads/master from Tamas Palfy
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=c2f46c4 ]

NIFI-7586 In CassandraSessionProvider added properties to set socket-level read 
timeout and connect timeout.

In QueryCassandra, writing the flowfile to the session was done on the raw 
OutputStream.
Wrapped it in a BufferedOutputStream for better performance.

This closes #4368.

Signed-off-by: Peter Turcsanyi 


> Add socket-level timeout properties for CassandraSessionProvider
> 
>
> Key: NIFI-7586
> URL: https://issues.apache.org/jira/browse/NIFI-7586
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The DataStax library used by NiFi to connect to Cassandra would allow the 
> setting of socket level read timeout and connect timeout but NiFi doesn't 
> expose them as properties or any other way.
> The default values are a couple of seconds which is probably enough most of 
> the time but not always.
> We should allow the users to provide their own configuration.





[GitHub] [nifi] asfgit closed pull request #4368: NIFI-7586 Add socket-level timeout properties for CassandraSessionProvider

2020-06-30 Thread GitBox


asfgit closed pull request #4368:
URL: https://github.com/apache/nifi/pull/4368


   







[GitHub] [nifi-minifi-cpp] adamdebreceni commented on pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-30 Thread GitBox


adamdebreceni commented on pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#issuecomment-651786495


   in the nanofi lib, there was a macro called `SUCCESS` which collides with 
the `CachedValueValidator::Result::SUCCESS`, and this only came out when 
building with python enabled







[GitHub] [nifi] javiroman commented on pull request #4358: NIFI-7578 nifi-toolkit CLI Process Group Create command

2020-06-30 Thread GitBox


javiroman commented on pull request #4358:
URL: https://github.com/apache/nifi/pull/4358#issuecomment-651786555


   Running `mvn -Pcontrib-check clean install`
   from the command line (outside of the IDEA IDE) raised the style-check 
warnings. Some configuration in my IDEA IDE must be preventing the correct 
behaviour.






[GitHub] [nifi] bbende commented on pull request #4358: NIFI-7578 nifi-toolkit CLI Process Group Create command

2020-06-30 Thread GitBox


bbende commented on pull request #4358:
URL: https://github.com/apache/nifi/pull/4358#issuecomment-651772910


   Not sure if this is the reason, but try running "install" instead of 
"compile".
   
   Here was the output for me:
   ```
   [INFO] --- maven-checkstyle-plugin:3.1.0:check (check-style) @ 
nifi-toolkit-cli ---
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[19,8]
 (imports) UnusedImports: Unused import - org.apache.commons.lang3.StringUtils.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[29,8]
 (imports) UnusedImports: Unused import - 
org.apache.nifi.toolkit.cli.impl.result.nifi.ProcessGroupsResult.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[31,8]
 (imports) UnusedImports: Unused import - org.apache.nifi.web.api.dto.UserDTO.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[32,8]
 (imports) UnusedImports: Unused import - 
org.apache.nifi.web.api.dto.flow.FlowDTO.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[33,8]
 (imports) UnusedImports: Unused import - 
org.apache.nifi.web.api.dto.flow.ProcessGroupFlowDTO.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[34,8]
 (imports) UnusedImports: Unused import - 
org.apache.nifi.web.api.entity.ControllerServiceEntity.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[36,8]
 (imports) UnusedImports: Unused import - 
org.apache.nifi.web.api.entity.ProcessGroupFlowEntity.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[37,8]
 (imports) UnusedImports: Unused import - 
org.apache.nifi.web.api.entity.UserEntity.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[40,8]
 (imports) UnusedImports: Unused import - java.util.ArrayList.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[41,8]
 (imports) UnusedImports: Unused import - java.util.List.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[49,23]
 (blocks) LeftCurly: '{' at column 23 should have line break after.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[52,36]
 (blocks) LeftCurly: '{' at column 36 should have line break after.
   [WARNING] 
src/main/java/org/apache/nifi/toolkit/cli/impl/command/nifi/pg/PGCreate.java:[55,50]
 (blocks) LeftCurly: '{' at column 50 should have line break after.
   ```







[GitHub] [nifi-minifi-cpp] szaszm commented on pull request #797: MINIFICPP-1231 - General property validation + use them in MergeContent.

2020-06-30 Thread GitBox


szaszm commented on pull request #797:
URL: https://github.com/apache/nifi-minifi-cpp/pull/797#issuecomment-651769183


   What was the failure that made d5a4bb4 necessary? I didn't see failures on 
travis, and I don't see how it changes behavior in a way that could fix a build 
failure.







[GitHub] [nifi] tpalfy commented on pull request #4368: NIFI-7586 Add socket-level timeout properties for CassandraSessionProvider

2020-06-30 Thread GitBox


tpalfy commented on pull request #4368:
URL: https://github.com/apache/nifi/pull/4368#issuecomment-651762708


   @turcsanyip 
   Thanks for finding this. It turns out there's been a bug in 
CassandraSessionProvider for a long time now.
   Not related to this change so I opened a new ticket and will fix it in a 
subsequent PR: https://issues.apache.org/jira/browse/NIFI-7590







[jira] [Created] (NIFI-7590) CassandraSessionProvider breaks after disable + re-enable

2020-06-30 Thread Tamas Palfy (Jira)
Tamas Palfy created NIFI-7590:
-

 Summary: CassandraSessionProvider breaks after disable + re-enable
 Key: NIFI-7590
 URL: https://issues.apache.org/jira/browse/NIFI-7590
 Project: Apache NiFi
  Issue Type: Improvement
Reporter: Tamas Palfy


If Cassandra processors are using the CassandraSessionProvider service and the 
service is disabled and then re-enabled (typically when one wants to edit its 
properties), the service can no longer connect to Cassandra and the 
processor keeps failing.

Currently the only way to fix this is to restart NiFi.

The root cause is a bug in the @OnDisabled and @OnEnabled:

{code:java}
@OnDisabled
public void onDisabled(){
if (cassandraSession != null) {
cassandraSession.close();
}
if (cluster != null) {
cluster.close();
}
}

@OnEnabled
public void onEnabled(final ConfigurationContext context) {
connectToCassandra(context);
}

private void connectToCassandra(ConfigurationContext context) {
if (cluster == null) {
...
{code}

In @OnDisabled, cluster is _closed_ but _not set to null_.
In @OnEnabled, it is created _only if it is null_.
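A minimal, self-contained sketch of the fix implied above (the class and field names below are illustrative stand-ins, not the real NiFi/DataStax API): nulling the reference in the disable handler lets the enable handler rebuild the connection.

```java
// Minimal sketch of the NIFI-7590 bug pattern and fix. The names below are
// hypothetical stand-ins, not the real NiFi/DataStax types.
class SessionProviderSketch {
    Object cluster;              // stands in for com.datastax.driver.core.Cluster
    int connectCount = 0;

    void onEnabled() {           // corresponds to the @OnEnabled handler
        if (cluster == null) {   // connection is only (re)built when null
            cluster = new Object();
            connectCount++;
        }
    }

    void onDisabled() {          // corresponds to the @OnDisabled handler
        if (cluster != null) {
            // real code closes the session and cluster here
            cluster = null;      // the fix: reset so onEnabled() reconnects
        }
    }
}

public class Nifi7590Demo {
    public static void main(String[] args) {
        SessionProviderSketch p = new SessionProviderSketch();
        p.onEnabled();
        p.onDisabled();
        p.onEnabled();           // with the reset, this builds a new connection
        if (p.connectCount != 2) throw new AssertionError("expected a reconnect");
        System.out.println("connections built: " + p.connectCount); // prints 2
    }
}
```

Without the `cluster = null` line, the second `onEnabled()` call is a no-op, which mirrors the restart-only failure described above.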




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] turcsanyip edited a comment on pull request #4368: NIFI-7586 Add socket-level timeout properties for CassandraSessionProvider

2020-06-30 Thread GitBox


turcsanyip edited a comment on pull request #4368:
URL: https://github.com/apache/nifi/pull/4368#issuecomment-651396020


   After changing the timeouts, I get the following errors from QueryCassandra 
/ PutCassandraQL:
   ```
   ERROR [Timer-Driven Process Thread-10] o.a.n.p.cassandra.QueryCassandra 
QueryCassandra[id=01ad5a7c-0173-1000-7fb3-a34af5274655] No host in the 
Cassandra cluster can be contacted successfully to execute this query: 
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried 
for query failed (no host was tried)
   ```
   ```
   com.datastax.driver.core.exceptions.DriverInternalError: Unexpected 
exception thrown
   ```
   
   Did it work for you?
   
   Update:
   Seems to be an old issue.
   Can be replicated on the current master:
   - set up QueryCassandra with CassandraSessionProvider and start them
   - stop QueryCassandra
   - disable CassandraSessionProvider
   - enable CassandraSessionProvider (no config change needed)
   - start QueryCassandra => error
   
   
   The new properties look good to me and work properly.
   Merging to master...







[GitHub] [nifi] gardellajuanpablo commented on a change in pull request #4352: NIFI-7563 Optimize the usage of JMS sessions and message producers

2020-06-30 Thread GitBox


gardellajuanpablo commented on a change in pull request #4352:
URL: https://github.com/apache/nifi/pull/4352#discussion_r447610193



##
File path: 
nifi-nar-bundles/nifi-jms-bundle/nifi-jms-processors/src/test/java/org/apache/nifi/jms/processors/PublishJMSIT.java
##
@@ -385,6 +390,66 @@ protected TcpTransport createTcpTransport(WireFormat wf, 
SocketFactory socketFac
 }
 }
 
+/**
+ * 
+ * This test validates the optimal resources usage. To process one message 
is expected to create only one connection, one session and one message producer.
+ * 
+ * 
+ * See https://issues.apache.org/jira/browse/NIFI-7563 for 
details.
+ * 
+ * @throws Exception any error related to the broker.
+ */
+@Test(timeout = 1)
+public void validateNIFI7563() throws Exception {
+BrokerService broker = new BrokerService();
+try {
+broker.setPersistent(false);
+TransportConnector connector = 
broker.addConnector("tcp://127.0.0.1:0");
+int port = connector.getServer().getSocketAddress().getPort();
+broker.start();
+
+final ActiveMQConnectionFactory innerCf = new 
ActiveMQConnectionFactory("tcp://127.0.0.1:" + port);
+ConnectionFactoryInvocationHandler connectionFactoryProxy = new 
ConnectionFactoryInvocationHandler(innerCf);
+
+// Create a connection Factory proxy to catch metrics and usage.
+ConnectionFactory cf = (ConnectionFactory) 
Proxy.newProxyInstance(ConnectionFactory.class.getClassLoader(), new Class[] { 
ConnectionFactory.class }, connectionFactoryProxy);
+
+TestRunner runner = TestRunners.newTestRunner(new PublishJMS());
+JMSConnectionFactoryProviderDefinition cs = 
mock(JMSConnectionFactoryProviderDefinition.class);
+when(cs.getIdentifier()).thenReturn("cfProvider");
+when(cs.getConnectionFactory()).thenReturn(cf);
+runner.addControllerService("cfProvider", cs);
+runner.enableControllerService(cs);
+
+runner.setProperty(PublishJMS.CF_SERVICE, "cfProvider");
+
+String destinationName = "myDestinationName";
+// The destination option according current implementation should 
contain topic or queue to infer the destination type
+// from the name. Check 
https://issues.apache.org/jira/browse/NIFI-7561. Once that is fixed, the name 
can be
+// randomly created.
+String topicNameInHeader = "topic-foo";
+runner.setProperty(PublishJMS.DESTINATION, destinationName);
+runner.setProperty(PublishJMS.DESTINATION_TYPE, PublishJMS.QUEUE);
+
+Map<String, String> flowFileAttributes = new HashMap<>();
+// This method will be removed once 
https://issues.apache.org/jira/browse/NIFI-7564 is fixed.
+flowFileAttributes.put(JmsHeaders.DESTINATION, topicNameInHeader);
+flowFileAttributes.put(JmsHeaders.REPLY_TO, topicNameInHeader);
+runner.enqueue("hi".getBytes(), flowFileAttributes);
+runner.enqueue("h2".getBytes(), flowFileAttributes);
+runner.setThreadCount(1);

Review comment:
   Thanks for reviewing it. Actually the issue happens when a worker is 
reused. It can be reproduced with only one thread, but I can add another test 
to verify using **less than** (it could be possible that for N threads only M 
workers are created, when M < N). I will do that.
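The `ConnectionFactoryInvocationHandler` referenced in the test is not shown in this excerpt; a minimal sketch of the general call-counting dynamic-proxy technique it presumably relies on (toy `Factory` interface here, not the real JMS `ConnectionFactory`):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: wrap an interface in a dynamic proxy that counts how often a
// given method is invoked, then assert on the count in a test.
interface Factory { String create(); }

public class CountingProxyDemo {
    public static void main(String[] args) {
        Factory real = () -> "conn";
        AtomicInteger creates = new AtomicInteger();
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            if (method.getName().equals("create")) creates.incrementAndGet();
            return method.invoke(real, methodArgs); // delegate to the real object
        };
        Factory counted = (Factory) Proxy.newProxyInstance(
                Factory.class.getClassLoader(), new Class<?>[]{Factory.class}, handler);
        counted.create();
        counted.create();
        if (creates.get() != 2) throw new AssertionError("expected 2 calls");
        System.out.println(creates.get()); // prints 2
    }
}
```

A test can hand the proxied factory to the code under test and later assert that, say, only one connection was created per processed message.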









[GitHub] [nifi] MikeThomsen commented on pull request #3317: NIFI-6047 Add DetectDuplicateRecord Processor

2020-06-30 Thread GitBox


MikeThomsen commented on pull request #3317:
URL: https://github.com/apache/nifi/pull/3317#issuecomment-651704852


   @adamfisher you need to do a rebase off of master and force push that 
change. You brought in about 100 odd commits from other people by merging 
master.
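
   For reference, the typical command sequence for that cleanup looks roughly 
like the sketch below — an illustrative workflow only, where `origin` and the 
feature branch name are placeholders for the contributor's actual remote and 
branch:

```shell
# Replay only your own commits on top of the latest master, dropping the
# stray merge commits, then overwrite the remote branch.
git fetch origin
git rebase origin/master
# resolve any conflicts, then: git rebase --continue
git push --force-with-lease origin my-feature-branch   # branch name is an example
```

`--force-with-lease` is used instead of `--force` so the push fails if someone 
else updated the branch in the meantime.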







[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #827: MINIFICPP-1273 - Drain connections on flow shutdown

2020-06-30 Thread GitBox


adamdebreceni commented on a change in pull request #827:
URL: https://github.com/apache/nifi-minifi-cpp/pull/827#discussion_r447575841



##
File path: libminifi/test/flow-tests/CMakeLists.txt
##
@@ -0,0 +1,31 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+file(GLOB FLOW_TESTS  "*.cpp")
+SET(FLOW_TEST_COUNT 0)
+FOREACH(testfile ${FLOW_TESTS})
+get_filename_component(testfilename "${testfile}" NAME_WE)
+add_executable("${testfilename}" "${testfile}")
+createTests("${testfilename}")
+target_link_libraries(${testfilename} ${CATCH_MAIN_LIB})
+target_wholearchive_library(${testfilename} minifi-standard-processors)
+MATH(EXPR FLOW_TEST_COUNT "${FLOW_TEST_COUNT}+1")
+add_test(NAME "${testfilename}" COMMAND "${testfilename}" 
WORKING_DIRECTORY ${TEST_DIR})
+ENDFOREACH()
+message("-- Finished building ${FLOW_TEST_COUNT} flow related test file(s)...")

Review comment:
   (it seems like the Enter key puts too much stress on my pinky)









[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #827: MINIFICPP-1273 - Drain connections on flow shutdown

2020-06-30 Thread GitBox


adamdebreceni commented on a change in pull request #827:
URL: https://github.com/apache/nifi-minifi-cpp/pull/827#discussion_r447574816



##
File path: libminifi/test/flow-tests/CMakeLists.txt
##
@@ -0,0 +1,31 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+file(GLOB FLOW_TESTS  "*.cpp")
+SET(FLOW_TEST_COUNT 0)
+FOREACH(testfile ${FLOW_TESTS})
+get_filename_component(testfilename "${testfile}" NAME_WE)
+add_executable("${testfilename}" "${testfile}")
+createTests("${testfilename}")
+target_link_libraries(${testfilename} ${CATCH_MAIN_LIB})
+target_wholearchive_library(${testfilename} minifi-standard-processors)
+MATH(EXPR FLOW_TEST_COUNT "${FLOW_TEST_COUNT}+1")
+add_test(NAME "${testfilename}" COMMAND "${testfilename}" 
WORKING_DIRECTORY ${TEST_DIR})
+ENDFOREACH()
+message("-- Finished building ${FLOW_TEST_COUNT} flow related test file(s)...")

Review comment:
   done

##
File path: libminifi/src/core/ProcessGroup.cpp
##
@@ -403,6 +403,16 @@ void 
ProcessGroup::removeConnection(std::shared_ptr connection) {
   }
 }
 
+void ProcessGroup::drainConnections() {
+  for (auto &&connection : connections_) {
+connection->drain();
+  }
+
+  for (std::set<ProcessGroup*>::iterator it = child_process_groups_.begin(); 
it != child_process_groups_.end(); ++it) {
+(*it)->drainConnections();
+  }

Review comment:
   done









[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #821: MINIFICPP-1251 - Implement and test RetryFlowFile processor

2020-06-30 Thread GitBox


hunyadi-dev commented on a change in pull request #821:
URL: https://github.com/apache/nifi-minifi-cpp/pull/821#discussion_r447569405



##
File path: extensions/standard-processors/tests/unit/RetryFlowFileTests.cpp
##
@@ -0,0 +1,221 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_MAIN
+
+#include 
+#include 
+#include 
+#include 
+
+#include "TestBase.h"
+
+#include "processors/GenerateFlowFile.h"
+#include "processors/UpdateAttribute.h"
+#include "processors/RetryFlowFile.h"
+#include "processors/PutFile.h"
+#include "processors/LogAttribute.h"
+#include "utils/file/FileUtils.h"
+#include "utils/OptionalUtils.h"
+#include "utils/TestUtils.h"
+
+namespace {
+using org::apache::nifi::minifi::utils::createTempDir;
+using org::apache::nifi::minifi::utils::optional;
+
+std::vector<std::pair<std::string, std::string>> list_dir_all(const 
std::string& dir, const std::shared_ptr<logging::Logger>& logger, bool 
recursive = true) {
+  return org::apache::nifi::minifi::utils::file::FileUtils::list_dir_all(dir, 
logger, recursive);
+}
+
+class RetryFlowFileTest {
+ public:
+  using Processor = org::apache::nifi::minifi::core::Processor;
+  using GenerateFlowFile = 
org::apache::nifi::minifi::processors::GenerateFlowFile;
+  using UpdateAttribute = 
org::apache::nifi::minifi::processors::UpdateAttribute;
+  using RetryFlowFile = org::apache::nifi::minifi::processors::RetryFlowFile;
+  using PutFile = org::apache::nifi::minifi::processors::PutFile;
+  using LogAttribute = org::apache::nifi::minifi::processors::LogAttribute;
+  RetryFlowFileTest() :
+logTestController_(LogTestController::getInstance()),
+
logger_(logging::LoggerFactory::getLogger())
 {
+reInitialize();
+  }
+  virtual ~RetryFlowFileTest() {
+logTestController_.reset();
+  }
+
+ protected:
+  void reInitialize() {
+testController_.reset(new TestController());
+plan_ = testController_->createPlan();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+logTestController_.setDebug();
+  }
+

Review comment:
   Thanks, and sorry for using the phrase "rules allow it"; I should have 
written "if others think there is added value in moving this to a comment in 
the codebase".

##
File path: extensions/standard-processors/tests/unit/RetryFlowFileTests.cpp
##
@@ -0,0 +1,221 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_MAIN
+
+#include 
+#include 
+#include 
+#include 
+
+#include "TestBase.h"
+
+#include "processors/GenerateFlowFile.h"
+#include "processors/UpdateAttribute.h"
+#include "processors/RetryFlowFile.h"
+#include "processors/PutFile.h"
+#include "processors/LogAttribute.h"
+#include "utils/file/FileUtils.h"
+#include "utils/OptionalUtils.h"
+#include "utils/TestUtils.h"
+
+namespace {
+using org::apache::nifi::minifi::utils::createTempDir;
+using org::apache::nifi::minifi::utils::optional;
+
+std::vector<std::pair<std::string, std::string>> list_dir_all(const 
std::string& dir, const std::shared_ptr<logging::Logger>& logger, bool 
recursive = true) {
+  return org::apache::nifi::minifi::utils::file::FileUtils::list_dir_all(dir, 
logger, recursive);
+}
+
+class RetryFlowFileTest {
+ public:
+  using Processor = org::apache::nifi::minifi::core::Processor;
+ 

[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-30 Thread GitBox


szaszm commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447559056



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, 
MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+ MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared<minifi::Configure>();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, 
utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared<minifi::Connection>(nullptr, nullptr, 
"Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = 
std::make_shared<TestFlowFileRepository>("flowFileRepository");
+
+std::atomic<int> flush_counter{0};
+
+std::atomic<bool> stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}
+  ff_repository->stop();
+}};
+
+ff_repository->onFlush_ = [&] {
+  if (++flush_counter != 1) {
+return;
+  }
+  
+  for (int keyIdx = 0; keyIdx < 100; ++keyIdx) {
+auto file = std::make_shared<minifi::FlowFileRecord>(ff_repository, 
nullptr);
+file->setUuidConnection(connection->getUUIDStr());
+// Serialize is sync
+file->Serialize();
+if (keyIdx % 2 == 0) {
+  // delete every second flowFile
+  ff_repository->Delete(file->getUUIDStr());
+}
+  }
+  stop = true;
+  // wait for the shutdown thread to start waiting for the worker thread
+  std::this_thread::sleep_for(std::chrono::milliseconds{100});
+};
+
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);
+ff_repository->loadComponent(nullptr);
+ff_repository->start();
+
+shutdown.join();
+  }
+
+  // check if the deleted flowfiles are indeed deleted
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = 
std::make_shared<TestFlowFileRepository>("flowFileRepository");
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);
+ff_repository->loadComponent(nullptr);
+ff_repository->start();
+std::this_thread::sleep_for(std::chrono::milliseconds{100});
+REQUIRE(connection->getQueueSize() == 50);
+  }
+}

Review comment:
   https://stackoverflow.com/a/729795/3997716
   
   Maybe it's more intuitive if you think about them as line-endings, not line 
separators.









[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #827: MINIFICPP-1273 - Drain connections on flow shutdown

2020-06-30 Thread GitBox


szaszm commented on a change in pull request #827:
URL: https://github.com/apache/nifi-minifi-cpp/pull/827#discussion_r447557059



##
File path: libminifi/test/flow-tests/CMakeLists.txt
##
@@ -0,0 +1,31 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+file(GLOB FLOW_TESTS  "*.cpp")
+SET(FLOW_TEST_COUNT 0)
+FOREACH(testfile ${FLOW_TESTS})
+get_filename_component(testfilename "${testfile}" NAME_WE)
+add_executable("${testfilename}" "${testfile}")
+createTests("${testfilename}")
+target_link_libraries(${testfilename} ${CATCH_MAIN_LIB})
+target_wholearchive_library(${testfilename} minifi-standard-processors)
+MATH(EXPR FLOW_TEST_COUNT "${FLOW_TEST_COUNT}+1")
+add_test(NAME "${testfilename}" COMMAND "${testfilename}" 
WORKING_DIRECTORY ${TEST_DIR})
+ENDFOREACH()
+message("-- Finished building ${FLOW_TEST_COUNT} flow related test file(s)...")

Review comment:
   The file should end with a newline character.
   
   
https://stackoverflow.com/questions/729692/why-should-text-files-end-with-a-newline









[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-30 Thread GitBox


szaszm commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447559056



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, 
MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+ MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared<minifi::Configure>();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, 
utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared<minifi::Connection>(nullptr, nullptr, 
"Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = 
std::make_shared<TestFlowFileRepository>("flowFileRepository");
+
+std::atomic<int> flush_counter{0};
+
+std::atomic<bool> stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}
+  ff_repository->stop();
+}};
+
+ff_repository->onFlush_ = [&] {
+  if (++flush_counter != 1) {
+return;
+  }
+  
+  for (int keyIdx = 0; keyIdx < 100; ++keyIdx) {
+auto file = std::make_shared<minifi::FlowFileRecord>(ff_repository, 
nullptr);
+file->setUuidConnection(connection->getUUIDStr());
+// Serialize is sync
+file->Serialize();
+if (keyIdx % 2 == 0) {
+  // delete every second flowFile
+  ff_repository->Delete(file->getUUIDStr());
+}
+  }
+  stop = true;
+  // wait for the shutdown thread to start waiting for the worker thread
+  std::this_thread::sleep_for(std::chrono::milliseconds{100});
+};
+
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);
+ff_repository->loadComponent(nullptr);
+ff_repository->start();
+
+shutdown.join();
+  }
+
+  // check if the deleted flowfiles are indeed deleted
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = 
std::make_shared<TestFlowFileRepository>("flowFileRepository");
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);
+ff_repository->loadComponent(nullptr);
+ff_repository->start();
+std::this_thread::sleep_for(std::chrono::milliseconds{100});
+REQUIRE(connection->getQueueSize() == 50);
+  }
+}

Review comment:
   https://stackoverflow.com/a/729795/3997716









[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #827: MINIFICPP-1273 - Drain connections on flow shutdown

2020-06-30 Thread GitBox


szaszm commented on a change in pull request #827:
URL: https://github.com/apache/nifi-minifi-cpp/pull/827#discussion_r447555750



##
File path: libminifi/src/core/ProcessGroup.cpp
##
@@ -403,6 +403,16 @@ void 
ProcessGroup::removeConnection(std::shared_ptr connection) {
   }
 }
 
+void ProcessGroup::drainConnections() {
+  for (auto &&connection : connections_) {
+connection->drain();
+  }
+
+  for (std::set<ProcessGroup*>::iterator it = child_process_groups_.begin(); 
it != child_process_groups_.end(); ++it) {
+(*it)->drainConnections();
+  }

Review comment:
   This could become a range-based for loop.

##
File path: libminifi/test/flow-tests/CMakeLists.txt
##
@@ -0,0 +1,31 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+file(GLOB FLOW_TESTS  "*.cpp")
+SET(FLOW_TEST_COUNT 0)
+FOREACH(testfile ${FLOW_TESTS})
+get_filename_component(testfilename "${testfile}" NAME_WE)
+add_executable("${testfilename}" "${testfile}")
+createTests("${testfilename}")
+target_link_libraries(${testfilename} ${CATCH_MAIN_LIB})
+target_wholearchive_library(${testfilename} minifi-standard-processors)
+MATH(EXPR FLOW_TEST_COUNT "${FLOW_TEST_COUNT}+1")
+add_test(NAME "${testfilename}" COMMAND "${testfilename}" 
WORKING_DIRECTORY ${TEST_DIR})
+ENDFOREACH()
+message("-- Finished building ${FLOW_TEST_COUNT} flow related test file(s)...")

Review comment:
   The file should end with a newline character









[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-30 Thread GitBox


szaszm commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447549997



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, 
MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+ MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XX";

Review comment:
   1. I don't think we have a clear list. I think we just aim for a large 
GNU/Linux coverage.
   2. Not sure if it's the second best place. Normally we clean up temporary 
directories, but this doesn't happen when tests crash, so there may be 
leftovers in some rare situations. In a discussion around the time of fixing 
the referred bug, we considered disabling direct IO, but that would mean we 
don't test the same behavior that we are running, so going for a different 
directory seemed to be the better approach.









[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-30 Thread GitBox


szaszm commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447547205



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, 
MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+ MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared<minifi::Configure>();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, 
utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared<minifi::Connection>(nullptr, nullptr, 
"Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = 
std::make_shared<TestFlowFileRepository>("flowFileRepository");
+
+std::atomic<int> flush_counter{0};
+
+std::atomic<bool> stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}
+  ff_repository->stop();
+}};
+
+ff_repository->onFlush_ = [&] {
+  if (++flush_counter != 1) {
+return;
+  }
+  
+  for (int keyIdx = 0; keyIdx < 100; ++keyIdx) {
+auto file = std::make_shared<minifi::FlowFileRecord>(ff_repository, 
nullptr);
+file->setUuidConnection(connection->getUUIDStr());
+// Serialize is sync
+file->Serialize();
+if (keyIdx % 2 == 0) {
+  // delete every second flowFile
+  ff_repository->Delete(file->getUUIDStr());
+}
+  }
+  stop = true;
+  // wait for the shutdown thread to start waiting for the worker thread
+  std::this_thread::sleep_for(std::chrono::milliseconds{100});
+};
+
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);

Review comment:
   I agree that returning a bool to signal error is not the best idea. I 
would prefer the use of exceptions for all errors that are not usually part of 
the program flow (i.e. exceptional).
   
   I'm not aware of a list of changes we are "holding back", but creating a 
Jira issue with Fix version = "1.0" could be one way of maintaining such a 
list, because we can search for those later.









[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #827: MINIFICPP-1273 - Drain connections on flow shutdown

2020-06-30 Thread GitBox


adamdebreceni opened a new pull request #827:
URL: https://github.com/apache/nifi-minifi-cpp/pull/827


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-30 Thread GitBox


adamdebreceni commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447465066



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, 
MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+ MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared<minifi::Configure>();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, 
utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared<minifi::Connection>(nullptr, nullptr, 
"Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = 
std::make_shared<TestFlowFileRepository>("flowFileRepository");
+
+std::atomic<int> flush_counter{0};
+
+std::atomic<bool> stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}
+  ff_repository->stop();
+}};
+
+ff_repository->onFlush_ = [&] {
+  if (++flush_counter != 1) {
+return;
+  }
+  
+  for (int keyIdx = 0; keyIdx < 100; ++keyIdx) {
+auto file = std::make_shared<minifi::FlowFileRecord>(ff_repository, 
nullptr);
+file->setUuidConnection(connection->getUUIDStr());
+// Serialize is sync
+file->Serialize();
+if (keyIdx % 2 == 0) {
+  // delete every second flowFile
+  ff_repository->Delete(file->getUUIDStr());
+}
+  }
+  stop = true;
+  // wait for the shutdown thread to start waiting for the worker thread
+  std::this_thread::sleep_for(std::chrono::milliseconds{100});
+};
+
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);
+ff_repository->loadComponent(nullptr);
+ff_repository->start();
+
+shutdown.join();
+  }
+
+  // check if the deleted flowfiles are indeed deleted
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = 
std::make_shared<TestFlowFileRepository>("flowFileRepository");
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);
+ff_repository->loadComponent(nullptr);
+ff_repository->start();
+std::this_thread::sleep_for(std::chrono::milliseconds{100});
+REQUIRE(connection->getQueueSize() == 50);
+  }
+}

Review comment:
   done

##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, 
MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME,
+ MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XXXXXX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared<minifi::Configure>();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared<minifi::Connection>(nullptr, nullptr, "Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = std::make_shared<TestFlowFileRepository>("flowFileRepository");
+
+std::atomic<int> flush_counter{0};
+
+std::atomic<bool> stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}

Review comment:
   done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-30 Thread GitBox


adamdebreceni commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447465005



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME, MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XXXXXX";

Review comment:
   done









[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-30 Thread GitBox


adamdebreceni commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447464781



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME, MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XXXXXX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared<minifi::Configure>();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared<minifi::Connection>(nullptr, nullptr, "Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = std::make_shared<TestFlowFileRepository>("flowFileRepository");
+
+std::atomic<int> flush_counter{0};
+
+std::atomic<bool> stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}
+  ff_repository->stop();
+}};
+
+ff_repository->onFlush_ = [&] {
+  if (++flush_counter != 1) {
+return;
+  }
+  
+  for (int keyIdx = 0; keyIdx < 100; ++keyIdx) {
+auto file = std::make_shared<minifi::FlowFileRecord>(ff_repository, nullptr);
+file->setUuidConnection(connection->getUUIDStr());
+// Serialize is sync
+file->Serialize();
+if (keyIdx % 2 == 0) {
+  // delete every second flowFile
+  ff_repository->Delete(file->getUUIDStr());
+}
+  }
+  stop = true;
+  // wait for the shutdown thread to start waiting for the worker thread
+  std::this_thread::sleep_for(std::chrono::milliseconds{100});
+};
+
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);

Review comment:
   done
   
   I agree with the fail-fast philosophy; what I don't agree with is `Repository::initialize` returning `false` on failure. I don't see how the user is expected to handle it: of the two places it is called, one doesn't check the return value and the other terminates the application.
   
   Do we have a list of changes we were holding back on until a new major release?









[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-30 Thread GitBox


adamdebreceni commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447461792



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;

Review comment:
   done

##
File path: libminifi/src/FlowFileRecord.cpp
##
@@ -366,7 +366,7 @@ bool FlowFileRecord::DeSerialize(const uint8_t *buffer, const int bufferSize) {
 return false;
   }
 
-  if (nullptr == claim_) {

Review comment:
   done









[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #826: MINIFICPP-1274 - Commit delete operation before shutdown

2020-06-30 Thread GitBox


adamdebreceni commented on a change in pull request #826:
URL: https://github.com/apache/nifi-minifi-cpp/pull/826#discussion_r447454207



##
File path: libminifi/test/rocksdb-tests/RepoTests.cpp
##
@@ -326,3 +326,82 @@ TEST_CASE("Test FlowFile Restore", "[TestFFR6]") {
 
   LogTestController::getInstance().reset();
 }
+
+TEST_CASE("Flush deleted flowfiles before shutdown", "[TestFFR7]") {
+  using ConnectionMap = std::map<std::string, std::shared_ptr<minifi::Connection>>;
+
+  class TestFlowFileRepository: public core::repository::FlowFileRepository{
+   public:
+explicit TestFlowFileRepository(const std::string& name)
+: core::SerializableComponent(name),
+  FlowFileRepository(name, FLOWFILE_REPOSITORY_DIRECTORY, MAX_FLOWFILE_REPOSITORY_ENTRY_LIFE_TIME, MAX_FLOWFILE_REPOSITORY_STORAGE_SIZE, 1) {}
+void flush() override {
+  FlowFileRepository::flush();
+  if (onFlush_) {
+onFlush_();
+  }
+}
+std::function<void()> onFlush_;
+  };
+
+  TestController testController;
+  char format[] = "/tmp/testRepo.XXXXXX";
+  auto dir = testController.createTempDirectory(format);
+
+  auto config = std::make_shared<minifi::Configure>();
+  config->set(minifi::Configure::nifi_flowfile_repository_directory_default, utils::file::FileUtils::concat_path(dir, "flowfile_repository"));
+
+  auto connection = std::make_shared<minifi::Connection>(nullptr, nullptr, "Connection");
+  ConnectionMap connectionMap{{connection->getUUIDStr(), connection}};
+  // initialize repository
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = std::make_shared<TestFlowFileRepository>("flowFileRepository");
+
+std::atomic<int> flush_counter{0};
+
+std::atomic<bool> stop{false};
+std::thread shutdown{[&] {
+  while (!stop.load()) {}
+  ff_repository->stop();
+}};
+
+ff_repository->onFlush_ = [&] {
+  if (++flush_counter != 1) {
+return;
+  }
+  
+  for (int keyIdx = 0; keyIdx < 100; ++keyIdx) {
+auto file = std::make_shared<minifi::FlowFileRecord>(ff_repository, nullptr);
+file->setUuidConnection(connection->getUUIDStr());
+// Serialize is sync
+file->Serialize();
+if (keyIdx % 2 == 0) {
+  // delete every second flowFile
+  ff_repository->Delete(file->getUUIDStr());
+}
+  }
+  stop = true;
+  // wait for the shutdown thread to start waiting for the worker thread
+  std::this_thread::sleep_for(std::chrono::milliseconds{100});
+};
+
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);
+ff_repository->loadComponent(nullptr);
+ff_repository->start();
+
+shutdown.join();
+  }
+
+  // check if the deleted flowfiles are indeed deleted
+  {
+std::shared_ptr<TestFlowFileRepository> ff_repository = std::make_shared<TestFlowFileRepository>("flowFileRepository");
+ff_repository->setConnectionMap(connectionMap);
+ff_repository->initialize(config);
+ff_repository->loadComponent(nullptr);
+ff_repository->start();
+std::this_thread::sleep_for(std::chrono::milliseconds{100});
+REQUIRE(connection->getQueueSize() == 50);
+  }
+}

Review comment:
   will add, but why do we do this?




