[GitHub] [nifi] gresockj commented on a change in pull request #4691: NIFI-7990 add properties to map Record field as @timestamp in output …

2021-10-06 Thread GitBox


gresockj commented on a change in pull request #4691:
URL: https://github.com/apache/nifi/pull/4691#discussion_r723795065



##
File path: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
##
@@ -209,6 +207,25 @@
 .required(true)
 .build();
 
+static final PropertyDescriptor AT_TIMESTAMP = new 
PropertyDescriptor.Builder()
+.name("put-es-record-at-timestamp")
+.displayName("@timestamp Value")
+.description("The value to use as the @timestamp field (required 
for Elasticsearch Data Streams)")
+.required(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
+.build();
+
+static final PropertyDescriptor AT_TIMESTAMP_RECORD_PATH = new 
PropertyDescriptor.Builder()
+.name("put-es-record-at-timestamp-path")
+.displayName("@timestamp Record Path")
+.description("A RecordPath pointing to a field in the record(s) 
that contains the @timestamp for the document " +
+"(required for Elasticsearch Data Streams). If left blank 
the @timestamp will be determined using the main property type")

Review comment:
   Yep, I see the update in `PutElasticsearchRecord`, which does clarify it. Can you apply the same description to `PutElasticsearchHttpRecord`?

##
File path: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
##
@@ -209,6 +207,25 @@
 .required(true)
 .build();
 
+static final PropertyDescriptor AT_TIMESTAMP = new 
PropertyDescriptor.Builder()
+.name("put-es-record-at-timestamp")
+.displayName("@timestamp Value")
+.description("The value to use as the @timestamp field (required 
for Elasticsearch Data Streams)")
+.required(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
+.build();
+
+static final PropertyDescriptor AT_TIMESTAMP_RECORD_PATH = new 
PropertyDescriptor.Builder()
+.name("put-es-record-at-timestamp-path")
+.displayName("@timestamp Record Path")
+.description("A RecordPath pointing to a field in the record(s) 
that contains the @timestamp for the document " +
+"(required for Elasticsearch Data Streams). If left blank 
the @timestamp will be determined using the main property type")
+.required(false)
+.addValidator(new RecordPathValidator())
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.build();

Review comment:
   The property descriptions in `PutElasticsearchRecord` are clear to me now; let's just bring them over to `PutElasticsearchHttpRecord`.

##
File path: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
##
@@ -266,9 +283,11 @@
 descriptors.add(RECORD_WRITER);
 descriptors.add(LOG_ALL_ERRORS);
 descriptors.add(ID_RECORD_PATH);
+descriptors.add(AT_TIMESTAMP_RECORD_PATH);
 descriptors.add(INDEX);
 descriptors.add(TYPE);
 descriptors.add(INDEX_OP);
+descriptors.add(AT_TIMESTAMP);

Review comment:
   What you're describing sounds like the `PutElasticsearchRecord` 
processor, whose property order does look natural to me.  Here the order seems 
different, with the record path properties both together, followed by the 
index/type/op/@timestamp properties.  It seems to me that `AT_TIMESTAMP` could 
be moved to just below `AT_TIMESTAMP_RECORD_PATH` and still keep the same 
grouping -- does that make sense?
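   
   For illustration, that reordering might look roughly like this (a sketch only, mirroring the descriptor list quoted above):

```java
// Sketch of the suggested ordering -- keeps the two @timestamp properties adjacent
descriptors.add(RECORD_WRITER);
descriptors.add(LOG_ALL_ERRORS);
descriptors.add(ID_RECORD_PATH);
descriptors.add(AT_TIMESTAMP_RECORD_PATH);
descriptors.add(AT_TIMESTAMP);
descriptors.add(INDEX);
descriptors.add(TYPE);
descriptors.add(INDEX_OP);
```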








[GitHub] [nifi] exceptionfactory commented on pull request #5443: NIFI-9285 Upgrade ZooKeeper transitive version to 3.4.14

2021-10-06 Thread GitBox


exceptionfactory commented on pull request #5443:
URL: https://github.com/apache/nifi/pull/5443#issuecomment-937385209


   Thanks for the review @gresockj! Rebased and pushed to resolve the conflicts.






[GitHub] [nifi] ChrisSamo632 commented on pull request #4691: NIFI-7990 add properties to map Record field as @timestamp in output …

2021-10-06 Thread GitBox


ChrisSamo632 commented on pull request #4691:
URL: https://github.com/apache/nifi/pull/4691#issuecomment-937314291


   > I noticed that using the NIFI-7990.json flow definition, using 
`PutElasticsearchHttpRecord`, `@timestamp` for record "1" appears to be a 
numeric type but for record "2" it appears to be a string type:
   > ...
   > However, using `PutElasticsearchRecord`, `@timestamp` is consistently a 
numeric type:
   > ...
   > I don't know if this is actually a problem in ES, but wanted to point it 
out in case we need to update the @timestamp code to make it consistent in the 
two cases.
   
   Good spot! I think this was likely due to the `PutElasticsearchHttpRecord` processor **not** attempting to coerce the `@timestamp` field value when it was provided via the direct `@timestamp` property _or_ from a Record Path field whose DataType was STRING. I intentionally changed `PutElasticsearchRecord` to `coerceStringToLong`; this builds on a [question I previously asked myself on this PR](https://github.com/apache/nifi/pull/4691#discussion_r532728519).
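   
   To make the intent concrete, here is a minimal, self-contained sketch of the kind of coercion I mean -- illustrative only, not the actual `PutElasticsearchRecord` implementation:

```java
// Illustrative sketch only -- NOT the real PutElasticsearchRecord code.
// Coerce a purely numeric String @timestamp into a Long so it is written as a
// JSON number; leave date strings (and non-String values) untouched.
public final class TimestampCoercionSketch {

    static Object coerceStringToLong(final Object timestamp) {
        if (timestamp instanceof String) {
            final String text = ((String) timestamp).trim();
            final boolean numeric = !text.isEmpty() && text.length() <= 18
                    && text.chars().allMatch(Character::isDigit);
            if (numeric) {
                return Long.parseLong(text); // e.g. epoch millis supplied as text
            }
        }
        return timestamp;
    }

    public static void main(final String[] args) {
        System.out.println(coerceStringToLong("1633536000000"));        // printed as a Long
        System.out.println(coerceStringToLong("2021-10-06T12:00:00Z")); // left as a String
    }
}
```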






[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #4691: NIFI-7990 add properties to map Record field as @timestamp in output …

2021-10-06 Thread GitBox


ChrisSamo632 commented on a change in pull request #4691:
URL: https://github.com/apache/nifi/pull/4691#discussion_r723728282



##
File path: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
##
@@ -405,11 +424,17 @@ public void onTrigger(final ProcessContext context, final 
ProcessSession session
 
 this.nullSuppression = context.getProperty(SUPPRESS_NULLS).getValue();
 
-final String id_path = 
context.getProperty(ID_RECORD_PATH).evaluateAttributeExpressions(flowFile).getValue();
-final RecordPath recordPath = StringUtils.isEmpty(id_path) ? null : 
recordPathCache.getCompiled(id_path);
+final String idPath = 
context.getProperty(ID_RECORD_PATH).evaluateAttributeExpressions(flowFile).getValue();
+final RecordPath recordPath = StringUtils.isEmpty(idPath) ? null : 
recordPathCache.getCompiled(idPath);
 final StringBuilder sb = new StringBuilder();
 final Charset charset = 
Charset.forName(context.getProperty(CHARSET).evaluateAttributeExpressions(flowFile).getValue());
 
+final String atTimestamp = 
context.getProperty(AT_TIMESTAMP).evaluateAttributeExpressions(flowFile).getValue();
+final String atTimestampPath = 
context.getProperty(AT_TIMESTAMP_RECORD_PATH).isSet()
+? 
context.getProperty(AT_TIMESTAMP_RECORD_PATH).evaluateAttributeExpressions(flowFile).getValue()
+: null;

Review comment:
   To be fair, I just copied the existing code for the other property values, but I think you could be right and all the code like this could be simplified.
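   
   For what it's worth, one possible simplification (an untested sketch; it assumes the property has no default value, so `getValue()` already returns `null` when it is unset):

```java
// Sketch only: with no default value configured, getValue() returns null for an
// unset property, so the isSet() ternary above is likely redundant.
final String atTimestampPath = context.getProperty(AT_TIMESTAMP_RECORD_PATH)
        .evaluateAttributeExpressions(flowFile)
        .getValue();
```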








[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #4691: NIFI-7990 add properties to map Record field as @timestamp in output …

2021-10-06 Thread GitBox


ChrisSamo632 commented on a change in pull request #4691:
URL: https://github.com/apache/nifi/pull/4691#discussion_r723727794



##
File path: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
##
@@ -209,6 +207,25 @@
 .required(true)
 .build();
 
+static final PropertyDescriptor AT_TIMESTAMP = new 
PropertyDescriptor.Builder()
+.name("put-es-record-at-timestamp")
+.displayName("@timestamp Value")
+.description("The value to use as the @timestamp field (required 
for Elasticsearch Data Streams)")
+.required(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
+.build();
+
+static final PropertyDescriptor AT_TIMESTAMP_RECORD_PATH = new 
PropertyDescriptor.Builder()
+.name("put-es-record-at-timestamp-path")
+.displayName("@timestamp Record Path")
+.description("A RecordPath pointing to a field in the record(s) 
that contains the @timestamp for the document " +
+"(required for Elasticsearch Data Streams). If left blank 
the @timestamp will be determined using the main property type")

Review comment:
   Copy-and-paste error here, I think... I tried to keep the property descriptions more or less the same, but I think I messed something up a little. I've updated it to match the existing Index/Type properties -- does this make more sense?








[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #4691: NIFI-7990 add properties to map Record field as @timestamp in output …

2021-10-06 Thread GitBox


ChrisSamo632 commented on a change in pull request #4691:
URL: https://github.com/apache/nifi/pull/4691#discussion_r723727411



##
File path: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
##
@@ -209,6 +207,25 @@
 .required(true)
 .build();
 
+static final PropertyDescriptor AT_TIMESTAMP = new 
PropertyDescriptor.Builder()
+.name("put-es-record-at-timestamp")
+.displayName("@timestamp Value")
+.description("The value to use as the @timestamp field (required 
for Elasticsearch Data Streams)")
+.required(false)
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.addValidator(StandardValidators.NON_EMPTY_EL_VALIDATOR)
+.build();
+
+static final PropertyDescriptor AT_TIMESTAMP_RECORD_PATH = new 
PropertyDescriptor.Builder()
+.name("put-es-record-at-timestamp-path")
+.displayName("@timestamp Record Path")
+.description("A RecordPath pointing to a field in the record(s) 
that contains the @timestamp for the document " +
+"(required for Elasticsearch Data Streams). If left blank 
the @timestamp will be determined using the main property type")
+.required(false)
+.addValidator(new RecordPathValidator())
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.build();

Review comment:
   I'm not quite sure I follow what you mean, but I've re-worded the property description to be more in line with the other existing properties on the processor -- does it make more sense now?








[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #4691: NIFI-7990 add properties to map Record field as @timestamp in output …

2021-10-06 Thread GitBox


ChrisSamo632 commented on a change in pull request #4691:
URL: https://github.com/apache/nifi/pull/4691#discussion_r723727008



##
File path: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
##
@@ -266,9 +283,11 @@
 descriptors.add(RECORD_WRITER);
 descriptors.add(LOG_ALL_ERRORS);
 descriptors.add(ID_RECORD_PATH);
+descriptors.add(AT_TIMESTAMP_RECORD_PATH);
 descriptors.add(INDEX);
 descriptors.add(TYPE);
 descriptors.add(INDEX_OP);
+descriptors.add(AT_TIMESTAMP);

Review comment:
   I was following the existing pattern for the properties -- the direct value properties (e.g. index, type) all appear together at the top of the processor, and the Record Path lookup properties then appear in a separate group later on.

   We could re-organise all the properties, but I didn't really want to do that, initially at least.

   What do you think?








[GitHub] [nifi] Lehel44 opened a new pull request #5446: NIFI-9277: Add Record Reader and Writer to ListenHTTP

2021-10-06 Thread GitBox


Lehel44 opened a new pull request #5446:
URL: https://github.com/apache/nifi/pull/5446


   
   Thank you for submitting a contribution to Apache NiFi.
   
   Please provide a short description of the PR here:
   
    Description of PR
   
   Added record reader and writer to ListenHTTP processor:
   
   - Added new record reading/writing properties to ListenHTTP
   - Added validation to ListenHTTP
   - Added Record Reader and Writer to ListenHTTPServlet
   - In the case of an unpackager and multipart requests, record processing is not used
   - Added unit test to TestListenHTTP
   
   JIRA:
   
   https://issues.apache.org/jira/browse/NIFI-9277
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [ ] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   






[jira] [Created] (NIFI-9288) UI - Provide UX to support Verifying Configuration

2021-10-06 Thread Matt Gilman (Jira)
Matt Gilman created NIFI-9288:
-

 Summary: UI - Provide UX to support Verifying Configuration
 Key: NIFI-9288
 URL: https://issues.apache.org/jira/browse/NIFI-9288
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Core UI
Reporter: Matt Gilman
Assignee: Matt Gilman


NIFI-9009 introduced the ability to verify a proposed configuration and perform validity checks beyond what is supported in property validation. This JIRA is to build the UI for invoking this verification and displaying the verification results.
 * The front end will allow the user to validate the proposed configuration 
without Applying.
 * The back end will provide a listing of affected FlowFile attributes.
 * The front end will prompt the user to supply FlowFile attribute values.
 * The front end will display validation results to the user.
 * The front end will allow the user to make changes to the Property values to 
address the validation results.
 * The front end will allow the user to re-validate based on new Property 
values using previously entered FlowFile attributes and values.
 * Once the configuration dialog is closed, subsequent validation attempts would require the FlowFile attribute values to be re-entered.





[GitHub] [nifi] Lehel44 commented on a change in pull request #5356: NIFI-9183: Add a command-line option to save status history

2021-10-06 Thread GitBox


Lehel44 commented on a change in pull request #5356:
URL: https://github.com/apache/nifi/pull/5356#discussion_r723626168



##
File path: nifi-bootstrap/src/main/java/org/apache/nifi/bootstrap/RunNiFi.java
##
@@ -250,18 +252,13 @@ public static void main(String[] args) throws 
IOException, InterruptedException
 }
 dumpFile = new File(args[2]);
 } else {
-try {
-Paths.get(args[1]);
-} catch (InvalidPathException e) {
-System.err.println("Invalid filename. The command 
parameters are: status-history  ");
+final boolean isValid = DumpFileValidator.validate(args[1]);

Review comment:
   Yes, I missed that. Thanks!








[jira] [Closed] (NIFI-9287) nifi.content.repository.directory.default* has (potentially) incorrect formatting in admin guide

2021-10-06 Thread Alasdair Brown (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alasdair Brown closed NIFI-9287.


> nifi.content.repository.directory.default* has (potentially) incorrect 
> formatting in admin guide
> 
>
> Key: NIFI-9287
> URL: https://issues.apache.org/jira/browse/NIFI-9287
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.14.0
>Reporter: Alasdair Brown
>Assignee: Alasdair Brown
>Priority: Trivial
>






[jira] [Resolved] (NIFI-9287) nifi.content.repository.directory.default* has (potentially) incorrect formatting in admin guide

2021-10-06 Thread Alasdair Brown (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alasdair Brown resolved NIFI-9287.
--
Resolution: Invalid

Issue is only apparent in the Asciidoctor preview, not in the rendered HTML guide

> nifi.content.repository.directory.default* has (potentially) incorrect 
> formatting in admin guide
> 
>
> Key: NIFI-9287
> URL: https://issues.apache.org/jira/browse/NIFI-9287
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.14.0
>Reporter: Alasdair Brown
>Assignee: Alasdair Brown
>Priority: Trivial
>






[jira] [Updated] (NIFI-9287) nifi.content.repository.directory.default* has (potentially) incorrect formatting in admin guide

2021-10-06 Thread Alasdair Brown (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alasdair Brown updated NIFI-9287:
-
Description: (was: Some lines contain just a `+` which is used for new 
lines, but without a space infront of the +, this isn't always interpreted as a 
new line - 

 
{code:java}
|`nifi.content.repository.directory.default`*|The location of the Content 
Repository. The default value is `./content_repository`. ++*NOTE*: Multiple 
content repositories can be specified by using the 
`nifi.content.repository.directory.` prefix with unique suffixes and separate 
paths as values. ++For example, to provide two additional locations to act as 
part of the content repository, a user could also specify additional properties 
with keys of: ++`nifi.content.repository.directory.content1=/repos/content1` 
+`nifi.content.repository.directory.content2=/repos/content2` ++Providing three 
total locations, including  `nifi.content.repository.directory.default`.{code})

> nifi.content.repository.directory.default* has (potentially) incorrect 
> formatting in admin guide
> 
>
> Key: NIFI-9287
> URL: https://issues.apache.org/jira/browse/NIFI-9287
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.14.0
>Reporter: Alasdair Brown
>Assignee: Alasdair Brown
>Priority: Trivial
>






[jira] [Updated] (NIFI-9287) nifi.content.repository.directory.default* has (potentially) incorrect formatting in admin guide

2021-10-06 Thread Alasdair Brown (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alasdair Brown updated NIFI-9287:
-
Summary: nifi.content.repository.directory.default* has (potentially) 
incorrect formatting in admin guide  (was: 
nifi.content.repository.directory.default* has incorrect formatting in admin 
guide)

> nifi.content.repository.directory.default* has (potentially) incorrect 
> formatting in admin guide
> 
>
> Key: NIFI-9287
> URL: https://issues.apache.org/jira/browse/NIFI-9287
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.14.0
>Reporter: Alasdair Brown
>Assignee: Alasdair Brown
>Priority: Trivial
>
> Some lines contain just a `+` which is used for new lines, but without a 
> space in front of the +, this isn't always interpreted as a new line - 
>  
> {code:java}
> |`nifi.content.repository.directory.default`*|The location of the Content 
> Repository. The default value is `./content_repository`. ++*NOTE*: Multiple 
> content repositories can be specified by using the 
> `nifi.content.repository.directory.` prefix with unique suffixes and separate 
> paths as values. ++For example, to provide two additional locations to act as 
> part of the content repository, a user could also specify additional 
> properties with keys of: 
> ++`nifi.content.repository.directory.content1=/repos/content1` 
> +`nifi.content.repository.directory.content2=/repos/content2` ++Providing 
> three total locations, including  
> `nifi.content.repository.directory.default`.{code}





[jira] [Updated] (NIFI-9287) nifi.content.repository.directory.default* has incorrect formatting in admin guide

2021-10-06 Thread Alasdair Brown (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alasdair Brown updated NIFI-9287:
-
Description: 
Some lines contain just a `+` which is used for new lines, but without a space 
in front of the +, this isn't always interpreted as a new line - 

 
{code:java}
|`nifi.content.repository.directory.default`*|The location of the Content 
Repository. The default value is `./content_repository`. ++*NOTE*: Multiple 
content repositories can be specified by using the 
`nifi.content.repository.directory.` prefix with unique suffixes and separate 
paths as values. ++For example, to provide two additional locations to act as 
part of the content repository, a user could also specify additional properties 
with keys of: ++`nifi.content.repository.directory.content1=/repos/content1` 
+`nifi.content.repository.directory.content2=/repos/content2` ++Providing three 
total locations, including  `nifi.content.repository.directory.default`.{code}

  was:
Some lines  contain just a `+` which is used for new lines, but without a space 
infront of the +, this isn't interpreted as a new line - instead the plus is 
rendered as text.

 
{code:java}
|`nifi.content.repository.directory.default`*|The location of the Content 
Repository. The default value is `./content_repository`. ++*NOTE*: Multiple 
content repositories can be specified by using the 
`nifi.content.repository.directory.` prefix with unique suffixes and separate 
paths as values. ++For example, to provide two additional locations to act as 
part of the content repository, a user could also specify additional properties 
with keys of: ++`nifi.content.repository.directory.content1=/repos/content1` 
+`nifi.content.repository.directory.content2=/repos/content2` ++Providing three 
total locations, including  `nifi.content.repository.directory.default`.{code}


> nifi.content.repository.directory.default* has incorrect formatting in admin 
> guide
> --
>
> Key: NIFI-9287
> URL: https://issues.apache.org/jira/browse/NIFI-9287
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Documentation & Website
>Affects Versions: 1.14.0
>Reporter: Alasdair Brown
>Assignee: Alasdair Brown
>Priority: Trivial
>
> Some lines contain just a `+` which is used for new lines, but without a 
> space in front of the +, this isn't always interpreted as a new line - 
>  
> {code:java}
> |`nifi.content.repository.directory.default`*|The location of the Content 
> Repository. The default value is `./content_repository`. ++*NOTE*: Multiple 
> content repositories can be specified by using the 
> `nifi.content.repository.directory.` prefix with unique suffixes and separate 
> paths as values. ++For example, to provide two additional locations to act as 
> part of the content repository, a user could also specify additional 
> properties with keys of: 
> ++`nifi.content.repository.directory.content1=/repos/content1` 
> +`nifi.content.repository.directory.content2=/repos/content2` ++Providing 
> three total locations, including  
> `nifi.content.repository.directory.default`.{code}





[jira] [Commented] (NIFI-8492) IPLookupService not working since 1.8.0?

2021-10-06 Thread Joel Berger (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425136#comment-17425136
 ] 

Joel Berger commented on NIFI-8492:
---

Please fix this. I wasted a whole day trying to figure out why this wouldn't work, and finding that it's a bug in NiFi -- and one that's already been fixed elsewhere -- was a real gut-punch. If it cannot be fixed, please at least update the documentation to indicate that it is currently broken, so that the next person doesn't spend all day trying to understand the failure like I did.

> IPLookupService not working since 1.8.0?
> 
>
> Key: NIFI-8492
> URL: https://issues.apache.org/jira/browse/NIFI-8492
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.11.4, 1.13.2
>Reporter: Benjamin Charron
>Priority: Major
>  Labels: IPLookupService
>
> IPLookupService, like its cousin GeoEnrichIP, uses maxmind/DatabaseReader.java.
> However, because they are in different NARs, they each have their own
> "DatabaseReader.java". GeoEnrichIP's version had a bug fixed back in 1.9.0 
> (https://issues.apache.org/jira/browse/NIFI-5814), but IPLookupService did 
> not.
> As far as I can tell, IPLookupService has also been broken since 1.8.0. We 
> get the following error with the IPLookupService in 1.11.4:
> {code}
> Caused by: java.lang.UnsupportedOperationException: null
> at java.util.Collections$UnmodifiableMap.put(Collections.java:1457)
> at 
> com.fasterxml.jackson.databind.node.ObjectNode.set(ObjectNode.java:370)
> at 
> org.apache.nifi.lookup.maxmind.DatabaseReader.get(DatabaseReader.java:158)
> at 
> org.apache.nifi.lookup.maxmind.DatabaseReader.city(DatabaseReader.java:194)
> at 
> org.apache.nifi.lookup.maxmind.IPLookupService.doLookup(IPLookupService.java:262)
> {code}
> Should the fixes in GeoEnrichIP's DatabaseReader also be copied to 
> IPLookupService's DatabaseReader?  Or is there a way for them to use the same 
> one without copying the file?





[GitHub] [nifi] sdairs commented on pull request #5438: NIFI-9029 Document Missing Properties in the Sys Admin Guide

2021-10-06 Thread GitBox


sdairs commented on pull request #5438:
URL: https://github.com/apache/nifi/pull/5438#issuecomment-936793979


   @markap14  Thanks for the solid description 😄  Hopefully I've captured it 
correctly in this commit 






[jira] [Created] (NIFI-9287) nifi.content.repository.directory.default* has incorrect formatting in admin guide

2021-10-06 Thread Alasdair Brown (Jira)
Alasdair Brown created NIFI-9287:


 Summary: nifi.content.repository.directory.default* has incorrect 
formatting in admin guide
 Key: NIFI-9287
 URL: https://issues.apache.org/jira/browse/NIFI-9287
 Project: Apache NiFi
  Issue Type: Bug
  Components: Documentation & Website
Affects Versions: 1.14.0
Reporter: Alasdair Brown
Assignee: Alasdair Brown


Some lines contain just a `+` which is used for new lines, but without a space in front of the +, this isn't interpreted as a new line - instead the plus is 
rendered as text.

 
{code:java}
|`nifi.content.repository.directory.default`*|The location of the Content 
Repository. The default value is `./content_repository`. ++*NOTE*: Multiple 
content repositories can be specified by using the 
`nifi.content.repository.directory.` prefix with unique suffixes and separate 
paths as values. ++For example, to provide two additional locations to act as 
part of the content repository, a user could also specify additional properties 
with keys of: ++`nifi.content.repository.directory.content1=/repos/content1` 
+`nifi.content.repository.directory.content2=/repos/content2` ++Providing three 
total locations, including  `nifi.content.repository.directory.default`.{code}





[jira] [Commented] (NIFI-9282) Support running with Java 17

2021-10-06 Thread Andrew Atwood (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425124#comment-17425124
 ] 

Andrew Atwood commented on NIFI-9282:
-

I tried building with Java 11, but when running with Java 17, NiFi fails to start with the following error:
{noformat}
2021-10-06 17:41:36,780 ERROR [main] org.apache.nifi.NiFi Failure to launch 
NiFi due to org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] 
Unable to make protected final java.lang.Class 
java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
 throws java.lang.ClassFormatError accessible: module java.base does not "opens 
java.lang" to unnamed module @720f29f0
org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] Unable to make 
protected final java.lang.Class 
java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
 throws java.lang.ClassFormatError accessible: module java.base does not "opens 
java.lang" to unnamed module @720f29f0
at 
org.xerial.snappy.SnappyLoader.injectSnappyNativeLoader(SnappyLoader.java:297)
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:227)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:48)
at 
org.apache.nifi.processors.hive.PutHiveStreaming.<clinit>(PutHiveStreaming.java:158)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:467)
at 
org.apache.nifi.nar.StandardExtensionDiscoveringManager.getClass(StandardExtensionDiscoveringManager.java:330)
at 
org.apache.nifi.documentation.DocGenerator.documentConfigurableComponent(DocGenerator.java:100)
at 
org.apache.nifi.documentation.DocGenerator.generate(DocGenerator.java:65)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1136)
at org.apache.nifi.NiFi.<init>(NiFi.java:170)
at org.apache.nifi.NiFi.<init>(NiFi.java:82)
at org.apache.nifi.NiFi.main(NiFi.java:331)
{noformat}
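
A common workaround for this particular error (untested against this build, and the java.arg index below is just an arbitrary unused slot) is to open java.lang to unnamed modules with an extra JVM argument in conf/bootstrap.conf:
{noformat}
# hypothetical bootstrap.conf addition -- pick any unused java.arg index
java.arg.20=--add-opens=java.base/java.lang=ALL-UNNAMED
{noformat}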

> Support running with Java 17
> 
>
> Key: NIFI-9282
> URL: https://issues.apache.org/jira/browse/NIFI-9282
> Project: Apache NiFi
>  Issue Type: Sub-task
>Reporter: Andrew Atwood
>Priority: Major
>






[GitHub] [nifi] exceptionfactory commented on a change in pull request #5410: NIFI-9221: Add AWS SecretsManager Sensitive Props Provider

2021-10-06 Thread GitBox


exceptionfactory commented on a change in pull request #5410:
URL: https://github.com/apache/nifi/pull/5410#discussion_r723480724



##
File path: 
nifi-commons/nifi-sensitive-property-provider/src/main/java/org/apache/nifi/properties/AwsSecretsManagerSensitivePropertyProvider.java
##
@@ -0,0 +1,157 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.properties;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.node.ObjectNode;
+import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
+import 
software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;
+import 
software.amazon.awssdk.services.secretsmanager.model.ResourceNotFoundException;
+import 
software.amazon.awssdk.services.secretsmanager.model.SecretsManagerException;
+
+import java.util.Objects;
+import java.util.Optional;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReadWriteLock;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+public class AwsSecretsManagerSensitivePropertyProvider extends 
AbstractSensitivePropertyProvider {
+private final SecretsManagerClient client;
+private final ObjectMapper objectMapper;
+
+private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
+private final Lock readLock = rwLock.readLock();
+private final Lock writeLock = rwLock.writeLock();
+
+AwsSecretsManagerSensitivePropertyProvider(final SecretsManagerClient 
client) {
+super(null);
+
+this.client = client;
+this.objectMapper = new ObjectMapper();
+}
+
+@Override
+public boolean isSupported() {
+return client != null;
+}
+
+@Override
+public String protect(final String unprotectedValue, final 
ProtectedPropertyContext context)
+throws SensitivePropertyProtectionException {
+Objects.requireNonNull(context, "Property context must be provided");
+Objects.requireNonNull(unprotectedValue, "Property value must be 
provided");
+
+if (client == null) {
+throw new SensitivePropertyProtectionException("AWS Secrets 
Manager Provider Not Configured");
+}
+
+try {
+writeLock.lock();
+final String secretName = context.getContextName();
+final Optional<JsonNode> secretKeyValuesOptional = 
getSecretKeyValues(context);
+final ObjectNode secretObject = (ObjectNode) 
secretKeyValuesOptional.orElse(objectMapper.createObjectNode());
+
+secretObject.put(context.getPropertyName(), unprotectedValue);
+final String secretString = 
objectMapper.writeValueAsString(secretObject);
+
+if (secretKeyValuesOptional.isPresent()) {
+client.putSecretValue(builder -> 
builder.secretId(secretName).secretString(secretString));
+} else {
+client.createSecret(builder -> 
builder.name(secretName).secretString(secretString));
+}
+return context.getContextKey();
+} catch (final SecretsManagerException | JsonProcessingException e) {
+throw new SensitivePropertyProtectionException(String.format("AWS 
Secrets Manager Secret Could Not Be Stored for [%s]", context), e);
+} finally {
+writeLock.unlock();
+}
+}
+
+@Override
+public String unprotect(final String protectedValue, final 
ProtectedPropertyContext context)
+throws SensitivePropertyProtectionException {
+Objects.requireNonNull(context, "Property context must be provided");
+
+if (client == null) {
+throw new SensitivePropertyProtectionException("AWS Secrets 
Manager Provider Not Configured");
+}
+try {
+readLock.lock();
+
+String propertyValue = null;
+final Optional<JsonNode> secretKeyValuesOptional = 
getSecretKeyValues(context);
+if (secretKeyValuesOptional.isPresent()) {
+final JsonNode secretKeyValues = secretKeyValuesOptional.get();
+final String pro

[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1178: MINIFICPP-1643 Add Managed Identity support for Azure processors

2021-10-06 Thread GitBox


fgerlits closed pull request #1178:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1178


   






[GitHub] [nifi-minifi-cpp] szaszm opened a new pull request #1194: MINIFICPP-1661 change PublishKafka Queue Buffering Max Time to 5ms

2021-10-06 Thread GitBox


szaszm opened a new pull request #1194:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1194


   Queue Buffering Max Time is translated to the "queue.buffering.max.ms" rdkafka property, which defaults to 5 ms, whereas our default is 10 seconds. It is the time librdkafka waits to fill up an internal buffer before sending the batch to the broker, so if MiNiFi only wants to send a few messages, it incurs a 10-second latency by default.
   
   This changes it to 5 milliseconds.
   
   ---
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [x] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [x] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the LICENSE file?
   - [x] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   






[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1188: MINIFICPP-1651: Added DefragTextFlowFiles processor

2021-10-06 Thread GitBox


martinzink commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r723401968



##
File path: libminifi/include/utils/FlowFileStore.h
##
@@ -0,0 +1,51 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+
+#include "core/FlowFile.h"
+namespace org::apache::nifi::minifi::utils {
+
+class FlowFileStore {
+ public:
+  std::unordered_set<std::shared_ptr<core::FlowFile>> getNewFlowFiles() {
+bool hasNewFlowFiles = true;
+if (!has_new_flow_file_.compare_exchange_strong(hasNewFlowFiles, false, 
std::memory_order_acquire, std::memory_order_relaxed)) {
+  return {};
+}
+std::lock_guard guard(flow_file_mutex_);
+return std::move(incoming_files_);
+  }
+
+  void put(const std::shared_ptr<core::FlowFile>& flowFile)  {
+{
+  std::lock_guard guard(flow_file_mutex_);
+  incoming_files_.emplace(std::move(flowFile));

Review comment:
   I just moved this from BinFiles::FlowFileStore without looking too much 
into it, but will change it.








[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1188: MINIFICPP-1651: Added DefragTextFlowFiles processor

2021-10-06 Thread GitBox


martinzink commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r723399621



##
File path: extensions/standard-processors/processors/DefragTextFlowFiles.h
##
@@ -0,0 +1,121 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+#include "core/Processor.h"
+#include "utils/FlowFileStore.h"
+#include "utils/Enum.h"
+#include "serialization/PayloadSerializer.h"
+
+
+namespace org::apache::nifi::minifi::processors {
+
+class DefragTextFlowFiles : public core::Processor {
+ public:
+  explicit DefragTextFlowFiles(const std::string& name,  const 
utils::Identifier& uuid = {})
+  : Processor(name, uuid) {
+logger_ = logging::LoggerFactory::getLogger();
+  }
+  EXTENSIONAPI static core::Relationship Self;
+  EXTENSIONAPI static core::Relationship Success;
+  EXTENSIONAPI static core::Relationship Original;
+  EXTENSIONAPI static core::Relationship Failure;
+
+  EXTENSIONAPI static core::Property Pattern;
+  EXTENSIONAPI static core::Property PatternLoc;
+  EXTENSIONAPI static core::Property MaxBufferAge;
+  EXTENSIONAPI static core::Property MaxBufferSize;
+
+  void initialize() override;
+  void onSchedule(core::ProcessContext* context, core::ProcessSessionFactory* 
sessionFactory) override;
+  void onTrigger(core::ProcessContext* context, core::ProcessSession* session) 
override;
+  void restore(const std::shared_ptr& flowFile) override;
+  std::set> getOutGoingConnections(const 
std::string &relationship) const override;
+
+  SMART_ENUM(PatternLocation,
+ (END_OF_MESSAGE, "End of Message"),
+ (START_OF_MESSAGE, "Start of Message")
+  )
+
+  class LastPatternFinder : public InputStreamCallback {
+   public:
+LastPatternFinder(const std::regex& pattern, PatternLocation 
pattern_location) : pattern_(pattern), pattern_location_(pattern_location) {}
+~LastPatternFinder() override = default;
+int64_t process(const std::shared_ptr& stream) override;
+
+bool foundPattern() const { return last_pattern_location.has_value(); }
+const std::optional& getLastPatternPosition() const { return 
last_pattern_location; }
+
+   protected:
+void searchContent(const std::string& content);
+const std::regex& pattern_;
+PatternLocation pattern_location_;
+std::optional last_pattern_location;
+  };

Review comment:
   I exposed it so it can be unit tested in extensions/standard-processors/tests/unit/DefragTextFlowFilesTests.cpp:
   TEST_CASE("FindLastRegexTest1", "[findlastregextest]")
   TEST_CASE("FindLastRegexTest2", "[findlastregextest2]")








[GitHub] [nifi-site] gresockj closed pull request #50: Add David Handermann to PMC members

2021-10-06 Thread GitBox


gresockj closed pull request #50:
URL: https://github.com/apache/nifi-site/pull/50


   






[GitHub] [nifi-site] gresockj commented on pull request #50: Add David Handermann to PMC members

2021-10-06 Thread GitBox


gresockj commented on pull request #50:
URL: https://github.com/apache/nifi-site/pull/50#issuecomment-936494252


   Merged






[jira] [Updated] (NIFI-9283) Upgrade Log4j 2 and exclude Log4j 1.2

2021-10-06 Thread Joe Gresock (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Gresock updated NIFI-9283:
--
Fix Version/s: 1.15.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade Log4j 2 and exclude Log4j 1.2
> -
>
> Key: NIFI-9283
> URL: https://issues.apache.org/jira/browse/NIFI-9283
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions, MiNiFi, NiFi Registry
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Labels: dependency-upgrade
> Fix For: 1.15.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> A small number of NiFi components include transitive dependencies on Log4j 
> 1.2 that should be excluded to avoid runtime conflicts with Logback.
> Several extension modules include transitive dependencies on older versions 
> Log4j 2, which have associated vulnerabilities with custom socket-based 
> appender configurations.
> Framework and extension modules should exclude all references to Log4j 1.2, 
> and transitive dependencies on Log4j 2 should be upgraded to the latest 
> version 2.14.1.





[GitHub] [nifi] asfgit closed pull request #5440: NIFI-9283 Exclude Log4j 1.2 and Upgrade Log4j2 to 2.14.1

2021-10-06 Thread GitBox


asfgit closed pull request #5440:
URL: https://github.com/apache/nifi/pull/5440


   






[jira] [Commented] (NIFI-9283) Upgrade Log4j 2 and exclude Log4j 1.2

2021-10-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17425039#comment-17425039
 ] 

ASF subversion and git services commented on NIFI-9283:
---

Commit 4bcd03024a419afdf40d464bda716f0b9d21925b in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=4bcd030 ]

NIFI-9283 Excluded Log4j 1.2 and upgraded Log4j2 to 2.14.1

Signed-off-by: Joe Gresock 

This closes #5440.


> Upgrade Log4j 2 and exclude Log4j 1.2
> -
>
> Key: NIFI-9283
> URL: https://issues.apache.org/jira/browse/NIFI-9283
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions, MiNiFi, NiFi Registry
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Labels: dependency-upgrade
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A small number of NiFi components include transitive dependencies on Log4j 
> 1.2 that should be excluded to avoid runtime conflicts with Logback.
> Several extension modules include transitive dependencies on older versions 
> Log4j 2, which have associated vulnerabilities with custom socket-based 
> appender configurations.
> Framework and extension modules should exclude all references to Log4j 1.2, 
> and transitive dependencies on Log4j 2 should be upgraded to the latest 
> version 2.14.1.





[GitHub] [nifi-minifi-cpp] adam-markovics commented on a change in pull request #1191: MINIFICPP-1566 - Annotate maximum allowed threads for processors

2021-10-06 Thread GitBox


adam-markovics commented on a change in pull request #1191:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1191#discussion_r723384271



##
File path: extensions/windows-event-log/ConsumeWindowsEventLog.cpp
##
@@ -188,7 +188,6 @@ ConsumeWindowsEventLog::ConsumeWindowsEventLog(const 
std::string& name, const ut
 }
 
 void ConsumeWindowsEventLog::notifyStop() {
-  std::lock_guard lock(on_trigger_mutex_);
   logger_->log_trace("start notifyStop");
   bookmark_.reset();

Review comment:
   Yes, it's possible, I looked it up. I am putting the mutex back.








[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1188: MINIFICPP-1651: Added DefragTextFlowFiles processor

2021-10-06 Thread GitBox


adamdebreceni commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r723384022



##
File path: extensions/standard-processors/processors/DefragTextFlowFiles.h
##
@@ -0,0 +1,121 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+#include 
+
+#include "core/Processor.h"
+#include "utils/FlowFileStore.h"
+#include "utils/Enum.h"
+#include "serialization/PayloadSerializer.h"
+
+
+namespace org::apache::nifi::minifi::processors {
+
+class DefragTextFlowFiles : public core::Processor {
+ public:
+  explicit DefragTextFlowFiles(const std::string& name,  const 
utils::Identifier& uuid = {})
+  : Processor(name, uuid) {
+logger_ = logging::LoggerFactory::getLogger();
+  }
+  EXTENSIONAPI static core::Relationship Self;
+  EXTENSIONAPI static core::Relationship Success;
+  EXTENSIONAPI static core::Relationship Original;
+  EXTENSIONAPI static core::Relationship Failure;
+
+  EXTENSIONAPI static core::Property Pattern;
+  EXTENSIONAPI static core::Property PatternLoc;
+  EXTENSIONAPI static core::Property MaxBufferAge;
+  EXTENSIONAPI static core::Property MaxBufferSize;
+
+  void initialize() override;
+  void onSchedule(core::ProcessContext* context, core::ProcessSessionFactory* 
sessionFactory) override;
+  void onTrigger(core::ProcessContext* context, core::ProcessSession* session) 
override;
+  void restore(const std::shared_ptr& flowFile) override;
+  std::set> getOutGoingConnections(const 
std::string &relationship) const override;
+
+  SMART_ENUM(PatternLocation,
+ (END_OF_MESSAGE, "End of Message"),
+ (START_OF_MESSAGE, "Start of Message")
+  )
+
+  class LastPatternFinder : public InputStreamCallback {
+   public:
+LastPatternFinder(const std::regex& pattern, PatternLocation 
pattern_location) : pattern_(pattern), pattern_location_(pattern_location) {}
+~LastPatternFinder() override = default;
+int64_t process(const std::shared_ptr& stream) override;
+
+bool foundPattern() const { return last_pattern_location.has_value(); }

Review comment:
   Since `getLastPatternPosition` already returns an optional, we could rely on that instead of this method.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] markap14 commented on pull request #5438: NIFI-9029 Document Missing Properties in the Sys Admin Guide

2021-10-06 Thread GitBox


markap14 commented on pull request #5438:
URL: https://github.com/apache/nifi/pull/5438#issuecomment-936447902


   @sdairs depending on the operating system, hardware, etc. in many 
environments, we can create & write files faster than we can delete them. As a 
result, we can have a situation where we create a lot of files in the content 
repository - so much so that we cannot keep up with deleting data that's no 
longer needed, and we can start running out of disk space.
   
   To prevent this, we have a backpressure mechanism: once the content repo 
decides that the threshold has been reached, it will not allow any new 
FlowFiles to be written until a background thread has had a chance to archive 
or destroy unneeded data in the content repository.
   
   So those 2 properties work together. When 
`nifi.content.repository.archive.backpressure.percentage` is reached, it 
applies backpressure and prevents writing until the background thread reduces 
disk usage to below the `nifi.content.repository.archive.max.usage.percentage` 
threshold. So if `nifi.content.repository.archive.max.usage.percentage` is set 
to 50% and `nifi.content.repository.archive.backpressure.percentage` is set to 
60%, it will allow content to be written until the repo is 60% full, then block 
writes until the repo is less than 50% full. If 
`nifi.content.repository.archive.backpressure.percentage` is not set, it 
defaults to 2% more than 
`nifi.content.repository.archive.max.usage.percentage`. So 52% by default.
   
   Or, said another way - they function together as the low-water mark & 
high-water mark :)
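
   Purely as an illustration (using the example values from above, not the 
shipped defaults), the pair would look like this in nifi.properties:

   nifi.content.repository.archive.max.usage.percentage=50%
   nifi.content.repository.archive.backpressure.percentage=60%

   With these values, writes are blocked once the repo reaches 60% usage and 
resume once the background thread brings it back under 50%; leaving the 
backpressure property unset gives the 52% behaviour described above.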


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1188: MINIFICPP-1651: Added DefragTextFlowFiles processor

2021-10-06 Thread GitBox


adamdebreceni commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r723365018



##
File path: extensions/standard-processors/processors/DefragTextFlowFiles.cpp
##
@@ -0,0 +1,336 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DefragTextFlowFiles.h"
+
+#include 
+
+#include "core/Resource.h"
+#include "serialization/PayloadSerializer.h"
+#include "utils/TextFragmentUtils.h"
+
+
+namespace org::apache::nifi::minifi::processors {
+
+core::Relationship DefragTextFlowFiles::Success("success", "Flowfiles that 
have no fragmented messages in them");
+core::Relationship DefragTextFlowFiles::Original("original", "The FlowFiles 
that were used to create the defragmented flowfiles");
+core::Relationship DefragTextFlowFiles::Failure("failure", "Flowfiles that 
failed the defragmentation process");
+core::Relationship DefragTextFlowFiles::Self("__self__", "Marks the FlowFile 
to be owned by this processor");
+
+core::Property DefragTextFlowFiles::Pattern(
+core::PropertyBuilder::createProperty("Pattern")
+->withDescription("A regular expression to match at the start or end 
of messages.")
+->withDefaultValue("")->isRequired(true)->build());
+
+core::Property DefragTextFlowFiles::PatternLoc(
+core::PropertyBuilder::createProperty("Pattern 
Location")->withDescription("Where to look for the pattern.")
+->withAllowableValues(PatternLocation::values())
+
->withDefaultValue(toString(PatternLocation::START_OF_MESSAGE))->build());
+
+
+core::Property DefragTextFlowFiles::MaxBufferSize(
+core::PropertyBuilder::createProperty("Max Buffer Size")
+->withDescription("The maximum buffer size, if the buffer exceeds 
this, it will be transferred to failure. Expected format is  ")
+
->withType(core::StandardValidators::get().DATA_SIZE_VALIDATOR)->build());
+
+core::Property DefragTextFlowFiles::MaxBufferAge(
+core::PropertyBuilder::createProperty("Max Buffer Age")->
+withDescription("The maximum age of a buffer after which the buffer 
will be transferred to failure. Expected format is  ")->build());
+
+void DefragTextFlowFiles::initialize() {
+  std::lock_guard defrag_lock(defrag_mutex_);
+
+  setSupportedRelationships({Success, Original, Failure});
+  setSupportedProperties({Pattern, PatternLoc, MaxBufferAge, MaxBufferSize});
+}
+
+void DefragTextFlowFiles::onSchedule(core::ProcessContext* context, 
core::ProcessSessionFactory*) {
+  std::lock_guard defrag_lock(defrag_mutex_);
+
+  std::string max_buffer_age_str;
+  if (context->getProperty(MaxBufferAge.getName(), max_buffer_age_str)) {
+core::TimeUnit unit;
+uint64_t max_buffer_age;
+if (core::Property::StringToTime(max_buffer_age_str, max_buffer_age, unit) 
&& core::Property::ConvertTimeUnitToMS(max_buffer_age, unit, max_buffer_age)) {
+  buffer_.setMaxAge(max_buffer_age);
+  logger_->log_trace("The Buffer maximum age is configured to be %d", 
max_buffer_age);
+}
+  }
+
+  std::string max_buffer_size_str;
+  if (context->getProperty(MaxBufferSize.getName(), max_buffer_size_str)) {
+uint64_t max_buffer_size = 
core::DataSizeValue(max_buffer_size_str).getValue();
+if (max_buffer_size > 0) {
+  buffer_.setMaxSize(max_buffer_size);
+  logger_->log_trace("The Buffer maximum size is configured to be %d", 
max_buffer_size);
+}
+  }
+
+  context->getProperty(PatternLoc.getName(), pattern_location_);
+
+  std::string pattern_str;
+  if (context->getProperty(Pattern.getName(), pattern_str)) {
+pattern_ = std::regex(pattern_str);
+logger_->log_trace("The Pattern is configured to be %s", pattern_str);
+  }
+}
+
+int64_t DefragTextFlowFiles::LastPatternFinder::process(const 
std::shared_ptr& stream) {
+  if (nullptr == stream)
+return 0;
+  std::vector buffer;
+  const auto ret = stream->read(buffer, stream->size());
+  if (io::isError(ret))
+return -1;
+  std::string content(buffer.begin(), buffer.end());
+  searchContent(content);
+
+  return 0;
+}
+
+void DefragTextFlowFiles::LastPatternFinder::searchContent(const std::string 
&content) {
+  auto matches_begin = std::sregex_iterator(content.begin(), content.end()

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1188: MINIFICPP-1651: Added DefragTextFlowFiles processor

2021-10-06 Thread GitBox


adamdebreceni commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r723363396



##
File path: extensions/standard-processors/processors/DefragTextFlowFiles.cpp
##
@@ -0,0 +1,336 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DefragTextFlowFiles.h"
+
+#include 
+
+#include "core/Resource.h"
+#include "serialization/PayloadSerializer.h"
+#include "utils/TextFragmentUtils.h"
+
+
+namespace org::apache::nifi::minifi::processors {
+
+core::Relationship DefragTextFlowFiles::Success("success", "Flowfiles that 
have no fragmented messages in them");
+core::Relationship DefragTextFlowFiles::Original("original", "The FlowFiles 
that were used to create the defragmented flowfiles");
+core::Relationship DefragTextFlowFiles::Failure("failure", "Flowfiles that 
failed the defragmentation process");
+core::Relationship DefragTextFlowFiles::Self("__self__", "Marks the FlowFile 
to be owned by this processor");
+
+core::Property DefragTextFlowFiles::Pattern(
+core::PropertyBuilder::createProperty("Pattern")
+->withDescription("A regular expression to match at the start or end 
of messages.")
+->withDefaultValue("")->isRequired(true)->build());
+
+core::Property DefragTextFlowFiles::PatternLoc(
+core::PropertyBuilder::createProperty("Pattern 
Location")->withDescription("Where to look for the pattern.")
+->withAllowableValues(PatternLocation::values())
+
->withDefaultValue(toString(PatternLocation::START_OF_MESSAGE))->build());
+
+
+core::Property DefragTextFlowFiles::MaxBufferSize(
+core::PropertyBuilder::createProperty("Max Buffer Size")
+->withDescription("The maximum buffer size, if the buffer exceeds 
this, it will be transferred to failure. Expected format is  ")
+
->withType(core::StandardValidators::get().DATA_SIZE_VALIDATOR)->build());
+
+core::Property DefragTextFlowFiles::MaxBufferAge(
+core::PropertyBuilder::createProperty("Max Buffer Age")->
+withDescription("The maximum age of a buffer after which the buffer 
will be transferred to failure. Expected format is  ")->build());
+
+void DefragTextFlowFiles::initialize() {
+  std::lock_guard defrag_lock(defrag_mutex_);
+
+  setSupportedRelationships({Success, Original, Failure});
+  setSupportedProperties({Pattern, PatternLoc, MaxBufferAge, MaxBufferSize});
+}
+
+void DefragTextFlowFiles::onSchedule(core::ProcessContext* context, 
core::ProcessSessionFactory*) {
+  std::lock_guard defrag_lock(defrag_mutex_);
+
+  std::string max_buffer_age_str;
+  if (context->getProperty(MaxBufferAge.getName(), max_buffer_age_str)) {
+core::TimeUnit unit;
+uint64_t max_buffer_age;
+if (core::Property::StringToTime(max_buffer_age_str, max_buffer_age, unit) 
&& core::Property::ConvertTimeUnitToMS(max_buffer_age, unit, max_buffer_age)) {
+  buffer_.setMaxAge(max_buffer_age);
+  logger_->log_trace("The Buffer maximum age is configured to be %d", 
max_buffer_age);
+}
+  }
+
+  std::string max_buffer_size_str;
+  if (context->getProperty(MaxBufferSize.getName(), max_buffer_size_str)) {
+uint64_t max_buffer_size = 
core::DataSizeValue(max_buffer_size_str).getValue();
+if (max_buffer_size > 0) {
+  buffer_.setMaxSize(max_buffer_size);
+  logger_->log_trace("The Buffer maximum size is configured to be %d", 
max_buffer_size);
+}
+  }
+
+  context->getProperty(PatternLoc.getName(), pattern_location_);
+
+  std::string pattern_str;
+  if (context->getProperty(Pattern.getName(), pattern_str)) {
+pattern_ = std::regex(pattern_str);
+logger_->log_trace("The Pattern is configured to be %s", pattern_str);
+  }
+}
+
+int64_t DefragTextFlowFiles::LastPatternFinder::process(const 
std::shared_ptr& stream) {
+  if (nullptr == stream)
+return 0;
+  std::vector buffer;
+  const auto ret = stream->read(buffer, stream->size());
+  if (io::isError(ret))
+return -1;
+  std::string content(buffer.begin(), buffer.end());
+  searchContent(content);
+
+  return 0;
+}
+
+void DefragTextFlowFiles::LastPatternFinder::searchContent(const std::string 
&content) {
+  auto matches_begin = std::sregex_iterator(content.begin(), content.end()

[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1188: MINIFICPP-1651: Added DefragTextFlowFiles processor

2021-10-06 Thread GitBox


adamdebreceni commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r723362437



##
File path: extensions/standard-processors/processors/DefragTextFlowFiles.cpp
##
@@ -0,0 +1,336 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DefragTextFlowFiles.h"
+
+#include 
+
+#include "core/Resource.h"
+#include "serialization/PayloadSerializer.h"
+#include "utils/TextFragmentUtils.h"
+
+
+namespace org::apache::nifi::minifi::processors {
+
+core::Relationship DefragTextFlowFiles::Success("success", "Flowfiles that 
have no fragmented messages in them");
+core::Relationship DefragTextFlowFiles::Original("original", "The FlowFiles 
that were used to create the defragmented flowfiles");
+core::Relationship DefragTextFlowFiles::Failure("failure", "Flowfiles that 
failed the defragmentation process");
+core::Relationship DefragTextFlowFiles::Self("__self__", "Marks the FlowFile 
to be owned by this processor");
+
+core::Property DefragTextFlowFiles::Pattern(
+core::PropertyBuilder::createProperty("Pattern")
+->withDescription("A regular expression to match at the start or end 
of messages.")
+->withDefaultValue("")->isRequired(true)->build());
+
+core::Property DefragTextFlowFiles::PatternLoc(
+core::PropertyBuilder::createProperty("Pattern 
Location")->withDescription("Where to look for the pattern.")
+->withAllowableValues(PatternLocation::values())
+
->withDefaultValue(toString(PatternLocation::START_OF_MESSAGE))->build());
+
+
+core::Property DefragTextFlowFiles::MaxBufferSize(
+core::PropertyBuilder::createProperty("Max Buffer Size")
+->withDescription("The maximum buffer size, if the buffer exceeds 
this, it will be transferred to failure. Expected format is  ")
+
->withType(core::StandardValidators::get().DATA_SIZE_VALIDATOR)->build());
+
+core::Property DefragTextFlowFiles::MaxBufferAge(
+core::PropertyBuilder::createProperty("Max Buffer Age")->
+withDescription("The maximum age of a buffer after which the buffer 
will be transferred to failure. Expected format is  ")->build());
+
+void DefragTextFlowFiles::initialize() {
+  std::lock_guard defrag_lock(defrag_mutex_);
+
+  setSupportedRelationships({Success, Original, Failure});
+  setSupportedProperties({Pattern, PatternLoc, MaxBufferAge, MaxBufferSize});
+}
+
+void DefragTextFlowFiles::onSchedule(core::ProcessContext* context, 
core::ProcessSessionFactory*) {
+  std::lock_guard defrag_lock(defrag_mutex_);
+
+  std::string max_buffer_age_str;
+  if (context->getProperty(MaxBufferAge.getName(), max_buffer_age_str)) {
+core::TimeUnit unit;
+uint64_t max_buffer_age;
+if (core::Property::StringToTime(max_buffer_age_str, max_buffer_age, unit) 
&& core::Property::ConvertTimeUnitToMS(max_buffer_age, unit, max_buffer_age)) {
+  buffer_.setMaxAge(max_buffer_age);
+  logger_->log_trace("The Buffer maximum age is configured to be %d", 
max_buffer_age);
+}
+  }
+
+  std::string max_buffer_size_str;
+  if (context->getProperty(MaxBufferSize.getName(), max_buffer_size_str)) {
+uint64_t max_buffer_size = 
core::DataSizeValue(max_buffer_size_str).getValue();
+if (max_buffer_size > 0) {
+  buffer_.setMaxSize(max_buffer_size);
+  logger_->log_trace("The Buffer maximum size is configured to be %d", 
max_buffer_size);
+}
+  }
+
+  context->getProperty(PatternLoc.getName(), pattern_location_);
+
+  std::string pattern_str;
+  if (context->getProperty(Pattern.getName(), pattern_str)) {
+pattern_ = std::regex(pattern_str);
+logger_->log_trace("The Pattern is configured to be %s", pattern_str);
+  }
+}
+
+int64_t DefragTextFlowFiles::LastPatternFinder::process(const 
std::shared_ptr& stream) {
+  if (nullptr == stream)
+return 0;
+  std::vector buffer;
+  const auto ret = stream->read(buffer, stream->size());
+  if (io::isError(ret))
+return -1;
+  std::string content(buffer.begin(), buffer.end());
+  searchContent(content);
+
+  return 0;
+}
+
+void DefragTextFlowFiles::LastPatternFinder::searchContent(const std::string 
&content) {
+  auto matches_begin = std::sregex_iterator(content.begin(), content.end()

[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1188: MINIFICPP-1651: Added DefragTextFlowFiles processor

2021-10-06 Thread GitBox


szaszm commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r723156567



##
File path: extensions/standard-processors/processors/DefragTextFlowFiles.cpp
##
@@ -0,0 +1,336 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DefragTextFlowFiles.h"
+
+#include 
+
+#include "core/Resource.h"
+#include "serialization/PayloadSerializer.h"
+#include "utils/TextFragmentUtils.h"
+
+
+namespace org::apache::nifi::minifi::processors {
+
+core::Relationship DefragTextFlowFiles::Success("success", "Flowfiles that 
have no fragmented messages in them");
+core::Relationship DefragTextFlowFiles::Original("original", "The FlowFiles 
that were used to create the defragmented flowfiles");

Review comment:
   Is there any difference between using this and cloning the flow files at 
the source(s) by starting two connections from the source relationship(s)?

##
File path: libminifi/include/utils/FlowFileStore.h
##
@@ -0,0 +1,51 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#pragma once
+
+#include 
+#include 
+#include 
+
+#include "core/FlowFile.h"
+namespace org::apache::nifi::minifi::utils {
+
+class FlowFileStore {
+ public:
+  std::unordered_set> getNewFlowFiles() {
+bool hasNewFlowFiles = true;
+if (!has_new_flow_file_.compare_exchange_strong(hasNewFlowFiles, false, 
std::memory_order_acquire, std::memory_order_relaxed)) {
+  return {};
+}
+std::lock_guard guard(flow_file_mutex_);
+return std::move(incoming_files_);
+  }
+
+  void put(const std::shared_ptr& flowFile)  {
+{
+  std::lock_guard guard(flow_file_mutex_);
+  incoming_files_.emplace(std::move(flowFile));

Review comment:
   Why try to move from a const reference? Take by value or just insert a 
copy.
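
   A sketch of the by-value variant, using the member names from the snippet 
above, so the argument can actually be moved from (the rest of the original 
body, not shown in this excerpt, would stay unchanged):

   void put(std::shared_ptr<core::FlowFile> flowFile) {
     std::lock_guard<std::mutex> guard(flow_file_mutex_);
     incoming_files_.emplace(std::move(flowFile));  // a real move now, not a copy of a const reference
   }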

##
File path: libminifi/test/ReadFromFlowFileTestProcessor.cpp
##
@@ -0,0 +1,67 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "ReadFromFlowFileTestProcessor.h"
+
+#include 
+#include 
+#include 
+
+namespace org::apache::nifi::minifi::processors {
+
+const std::string ReadFromFlowFileTestProcessor::OnScheduleLogStr = 
"ReadFromFlowFileTestProcessor::onSchedule executed";
+const std::string ReadFromFlowFileTestProcessor::OnTriggerLogStr = 
"ReadFromFlowFileTestProcessor::onTrigger executed";
+const std::string ReadFromFlowFileTestProcessor::OnUnScheduleLogStr = 
"ReadFromFlowFileTestProcessor::onUnSchedule";
+
+core::Relationship ReadFromFlowFileTestProcessor::Success("success", "success 
operational on the flow record");
+
+void ReadFromFlowFileTestProcessor::initialize() {
+  setSupportedRelationships({ Succes

[jira] [Created] (MINIFICPP-1661) PublishKafka Queue Buffering Max Time should default to 5ms

2021-10-06 Thread Marton Szasz (Jira)
Marton Szasz created MINIFICPP-1661:
---

 Summary: PublishKafka Queue Buffering Max Time should default to 
5ms
 Key: MINIFICPP-1661
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1661
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Marton Szasz
Assignee: Marton Szasz


Queue Buffering Max Time is translated to the "queue.buffering.max.ms" rdkafka 
property, which defaults to 5 ms, while our default is 10 seconds. It is the time 
librdkafka waits to fill an internal buffer before sending the batch to the 
broker, so if minifi wants to send just a few messages, it incurs a 10-second 
latency by default.

This issue is about changing our default to 5 milliseconds, which is both the 
librdkafka default and a much saner value.
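
For reference, this is roughly what setting the translated property means at the 
librdkafka level (a standalone sketch, not the MiNiFi code path):

  #include <librdkafka/rdkafka.h>

  char errstr[512];
  rd_kafka_conf_t* conf = rd_kafka_conf_new();
  // "queue.buffering.max.ms" (an alias of "linger.ms") is what Queue Buffering Max Time maps to
  if (rd_kafka_conf_set(conf, "queue.buffering.max.ms", "5",
                        errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
    // handle the error reported in errstr
  }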



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1188: MINIFICPP-1651: Added DefragTextFlowFiles processor

2021-10-06 Thread GitBox


adamdebreceni commented on a change in pull request #1188:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1188#discussion_r723339074



##
File path: extensions/standard-processors/processors/DefragTextFlowFiles.cpp
##
@@ -0,0 +1,336 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "DefragTextFlowFiles.h"
+
+#include 
+
+#include "core/Resource.h"
+#include "serialization/PayloadSerializer.h"
+#include "utils/TextFragmentUtils.h"
+
+
+namespace org::apache::nifi::minifi::processors {
+
+core::Relationship DefragTextFlowFiles::Success("success", "Flowfiles that 
have no fragmented messages in them");
+core::Relationship DefragTextFlowFiles::Original("original", "The FlowFiles 
that were used to create the defragmented flowfiles");
+core::Relationship DefragTextFlowFiles::Failure("failure", "Flowfiles that 
failed the defragmentation process");
+core::Relationship DefragTextFlowFiles::Self("__self__", "Marks the FlowFile 
to be owned by this processor");
+
+core::Property DefragTextFlowFiles::Pattern(
+core::PropertyBuilder::createProperty("Pattern")
+->withDescription("A regular expression to match at the start or end 
of messages.")
+->withDefaultValue("")->isRequired(true)->build());
+
+core::Property DefragTextFlowFiles::PatternLoc(
+core::PropertyBuilder::createProperty("Pattern 
Location")->withDescription("Where to look for the pattern.")
+->withAllowableValues(PatternLocation::values())
+
->withDefaultValue(toString(PatternLocation::START_OF_MESSAGE))->build());
+
+
+core::Property DefragTextFlowFiles::MaxBufferSize(
+core::PropertyBuilder::createProperty("Max Buffer Size")
+->withDescription("The maximum buffer size, if the buffer exceeds 
this, it will be transferred to failure. Expected format is  ")
+
->withType(core::StandardValidators::get().DATA_SIZE_VALIDATOR)->build());
+
+core::Property DefragTextFlowFiles::MaxBufferAge(
+core::PropertyBuilder::createProperty("Max Buffer Age")->
+withDescription("The maximum age of a buffer after which the buffer 
will be transferred to failure. Expected format is  ")->build());
+
+void DefragTextFlowFiles::initialize() {
+  std::lock_guard defrag_lock(defrag_mutex_);
+
+  setSupportedRelationships({Success, Original, Failure});
+  setSupportedProperties({Pattern, PatternLoc, MaxBufferAge, MaxBufferSize});
+}
+
+void DefragTextFlowFiles::onSchedule(core::ProcessContext* context, 
core::ProcessSessionFactory*) {
+  std::lock_guard defrag_lock(defrag_mutex_);
+
+  std::string max_buffer_age_str;
+  if (context->getProperty(MaxBufferAge.getName(), max_buffer_age_str)) {
+core::TimeUnit unit;
+uint64_t max_buffer_age;
+if (core::Property::StringToTime(max_buffer_age_str, max_buffer_age, unit) 
&& core::Property::ConvertTimeUnitToMS(max_buffer_age, unit, max_buffer_age)) {
+  buffer_.setMaxAge(max_buffer_age);
+  logger_->log_trace("The Buffer maximum age is configured to be %d", 
max_buffer_age);

Review comment:
   We could add the unit here, and to the buffer max size log as well
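
   For example (assuming the value has already been converted to milliseconds 
at this point, as the surrounding code suggests):

   logger_->log_trace("The Buffer maximum age is configured to be %llu ms",
                      static_cast<unsigned long long>(max_buffer_age));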




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1193: MINIFICPP-1660 - Use default extension path if none is provided

2021-10-06 Thread GitBox


adamdebreceni commented on a change in pull request #1193:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1193#discussion_r723310251



##
File path: libminifi/src/core/extension/ExtensionManager.cpp
##
@@ -86,14 +86,21 @@ ExtensionManager& ExtensionManager::get() {
   return instance;
 }
 
+constexpr const char* DEFAULT_EXTENSION_PATH = "../extensions/*";
+
 bool ExtensionManager::initialize(const std::shared_ptr& config) {
   static bool initialized = ([&] {
 logger_->log_trace("Initializing extensions");
 // initialize executable
 active_module_->initialize(config);
-std::optional pattern = config ? 
config->get(nifi_extension_path) : std::nullopt;
-if (!pattern) return;
-auto candidates = 
utils::file::match(utils::file::FilePattern(pattern.value(), [&] 
(std::string_view subpattern, std::string_view error_msg) {
+std::string pattern = [&] {
+  auto opt_pattern = config->get(nifi_extension_path);

Review comment:
   In theory this shouldn't occur, but we had better handle it anyway; I added an 
error log
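
   Roughly along these lines (a sketch only; the constant and member names are 
taken from the diff above):

   std::string pattern = [&]() -> std::string {
     auto opt_pattern = config ? config->get(nifi_extension_path) : std::nullopt;
     if (!opt_pattern) {
       logger_->log_error("No extension path provided, falling back to the default '%s'", DEFAULT_EXTENSION_PATH);
       return DEFAULT_EXTENSION_PATH;
     }
     return std::move(*opt_pattern);
   }();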




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-site] bbende merged pull request #51: Add GitHub link under Development section of navigation

2021-10-06 Thread GitBox


bbende merged pull request #51:
URL: https://github.com/apache/nifi-site/pull/51


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1193: MINIFICPP-1660 - Use default extension path if none is provided

2021-10-06 Thread GitBox


fgerlits commented on a change in pull request #1193:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1193#discussion_r723286567



##
File path: libminifi/src/core/extension/ExtensionManager.cpp
##
@@ -86,14 +86,21 @@ ExtensionManager& ExtensionManager::get() {
   return instance;
 }
 
+constexpr const char* DEFAULT_EXTENSION_PATH = "../extensions/*";
+
 bool ExtensionManager::initialize(const std::shared_ptr& config) {
   static bool initialized = ([&] {
 logger_->log_trace("Initializing extensions");
 // initialize executable
 active_module_->initialize(config);
-std::optional pattern = config ? 
config->get(nifi_extension_path) : std::nullopt;
-if (!pattern) return;
-auto candidates = 
utils::file::match(utils::file::FilePattern(pattern.value(), [&] 
(std::string_view subpattern, std::string_view error_msg) {
+std::string pattern = [&] {
+  auto opt_pattern = config->get(nifi_extension_path);

Review comment:
   We used to return without doing anything when `config` is null; now we 
are going to crash.  Is that intentional?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #1193: MINIFICPP-1660 - Use default extension path if none is provided

2021-10-06 Thread GitBox


adamdebreceni opened a new pull request #1193:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1193


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (MINIFICPP-1660) Add default extension path

2021-10-06 Thread Adam Debreceni (Jira)
Adam Debreceni created MINIFICPP-1660:
-

 Summary: Add default extension path
 Key: MINIFICPP-1660
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1660
 Project: Apache NiFi MiNiFi C++
  Issue Type: Bug
Reporter: Adam Debreceni
Assignee: Adam Debreceni


In case the user did not provide a `nifi.extension.path`, we should default to a 
sane value (`../extension/*`) to maintain backwards compatibility of the configuration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1191: MINIFICPP-1566 - Annotate maximum allowed threads for processors

2021-10-06 Thread GitBox


adamdebreceni commented on a change in pull request #1191:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1191#discussion_r723191038



##
File path: extensions/windows-event-log/ConsumeWindowsEventLog.cpp
##
@@ -188,7 +188,6 @@ ConsumeWindowsEventLog::ConsumeWindowsEventLog(const 
std::string& name, const ut
 }
 
 void ConsumeWindowsEventLog::notifyStop() {
-  std::lock_guard lock(on_trigger_mutex_);
   logger_->log_trace("start notifyStop");
   bookmark_.reset();

Review comment:
   Could an `onTrigger` be running when this `notifyStop` is called? If it 
could, the mutex protects the bookmark, so we might not be able to remove it
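
   A sketch of what keeping the guard would look like (assuming on_trigger_mutex_ 
is a plain std::mutex; the template argument is not visible in this excerpt):

   void ConsumeWindowsEventLog::notifyStop() {
     // keep onTrigger and notifyStop mutually exclusive while bookmark_ is torn down
     std::lock_guard<std::mutex> lock(on_trigger_mutex_);
     logger_->log_trace("start notifyStop");
     bookmark_.reset();
     // ...
   }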




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #1191: MINIFICPP-1566 - Annotate maximum allowed threads for processors

2021-10-06 Thread GitBox


adamdebreceni commented on a change in pull request #1191:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1191#discussion_r723189078



##
File path: libminifi/src/core/Processor.cpp
##
@@ -379,25 +379,37 @@ std::shared_ptr 
Processor::pickIncomingConnection() {
   return getNextIncomingConnectionImpl(rel_guard);
 }
 
-void Processor::validateAnnotations() const {
+void Processor::validateAnnotations() {
+  validateInputRequirements();
+  validateThreads();
+}
+
+void Processor::validateInputRequirements() const {
   switch (getInputRequirement()) {
 case annotation::Input::INPUT_REQUIRED: {
   if (!hasIncomingConnections()) {
 throw Exception(PROCESS_SCHEDULE_EXCEPTION, "INPUT_REQUIRED was 
specified for the processor, but no incoming connections were found");
   }
-  return;
+  break;
 }
 case annotation::Input::INPUT_ALLOWED:
-  return;
+  break;
 case annotation::Input::INPUT_FORBIDDEN: {
   if (hasIncomingConnections()) {
 throw Exception(PROCESS_SCHEDULE_EXCEPTION, "INPUT_FORBIDDEN was 
specified for the processor, but there are incoming connections");
   }
-  return;
 }
   }
 }
 
+void Processor::validateThreads() {
+  if (isSingleThreaded() && max_concurrent_tasks_ > 1) {
+logger_->log_warn("Processor %s can not be run in parallel, its \"max 
concurrent tasks\" value is too high. "
+  "It was set to 1 from %d.", name_, 
max_concurrent_tasks_);
+max_concurrent_tasks_ = 1;

Review comment:
   I think we should move this warning and the fallback to 1 into 
`Processor::setMaxConcurrentTasks`. There is a `setMaxConcurrentTasks` in 
Connectable as well as in Processor, but it is not virtual; we should make it 
virtual and let Processor override it
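
   A rough sketch of the suggested shape (the uint8_t parameter type is an 
assumption; whatever Connectable currently declares should be used):

   // Connectable.h
   virtual void setMaxConcurrentTasks(uint8_t tasks);

   // Processor.cpp
   void Processor::setMaxConcurrentTasks(const uint8_t tasks) {
     if (isSingleThreaded() && tasks > 1) {
       logger_->log_warn("Processor %s can not be run in parallel, its \"max concurrent tasks\" "
                         "value is too high. It was set to 1 from %d.", name_, tasks);
       max_concurrent_tasks_ = 1;
       return;
     }
     max_concurrent_tasks_ = tasks;
   }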




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] gresockj commented on pull request #5435: NIFI-9266 Add Azure Key Vault Secret SPP

2021-10-06 Thread GitBox


gresockj commented on pull request #5435:
URL: https://github.com/apache/nifi/pull/5435#issuecomment-936079968


   I just tested this out, and it works as expected.  Nice work, 
@exceptionfactory!  I took an incremental approach to adding the Azure 
credentials configuration, and the error messages were sensible along the way.  
Looks good to me, but I'll wait for @jfrazee's review before merging.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (MINIFICPP-1645) RocksDbPersistableKeyValueStoreService is included twice in the manifest

2021-10-06 Thread Ferenc Gerlits (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Gerlits resolved MINIFICPP-1645.
---
Resolution: Fixed

> RocksDbPersistableKeyValueStoreService is included twice in the manifest
> 
>
> Key: MINIFICPP-1645
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1645
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Ferenc Gerlits
>Assignee: Ferenc Gerlits
>Priority: Minor
> Fix For: 0.11.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> I suspect this is because it is registered under two different names using 
> {{REGISTER_RESOURCE_AS}}.  This seems to be a new issue introduced at the 
> time of the dynamic library change.  
> {{RocksDbPersistableKeyValueStoreService}} is the only service registered 
> under more than one name from the processors or controller services included 
> in the manifest.  (Some internal resources also have two names, but these are 
> not listed in the manifest.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-9227) Run Once not working when scheduling strategy is CRON or Event driven

2021-10-06 Thread Hsin-Ying Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hsin-Ying Lee updated NIFI-9227:

Status: Patch Available  (was: Open)

> Run Once not working when scheduling strategy is CRON or Event driven
> -
>
> Key: NIFI-9227
> URL: https://issues.apache.org/jira/browse/NIFI-9227
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.14.0
> Environment: Centos7, OpenJDK8, Chrome
>Reporter: Mermillod
>Assignee: Hsin-Ying Lee
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> "Run Once"  whith a GenerateFlowFile has a unexpected behaviour.
>  
>  # It works (the message is generated as expected )
>  # but It stucks with 1 active threads running (one by cluster node if 
> cluster)
>  # Using "terminate" on the processor to kill thread works



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] s9514171 opened a new pull request #5445: NIFI-9227 Run Once not working when scheduling strategy is CRON or Ev…

2021-10-06 Thread GitBox


s9514171 opened a new pull request #5445:
URL: https://github.com/apache/nifi/pull/5445


   …ent driven
   
   
   
    Description of PR
   
   
[https://issues.apache.org/jira/browse/NIFI-9227](https://issues.apache.org/jira/browse/NIFI-9227)
   
   This PR fixes the thread hang after Run Once with the CRON scheduling strategy 
and supports Run Once when the processor is set to Event Driven
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-** where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `main`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [X] Have you verified that the full build is successful on JDK 8?
   - [X] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] simonbence commented on a change in pull request #5356: NIFI-9183: Add a command-line option to save status history

2021-10-06 Thread GitBox


simonbence commented on a change in pull request #5356:
URL: https://github.com/apache/nifi/pull/5356#discussion_r722974016



##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/spring/StatusHistoryRepositoryFactoryBean.java
##
@@ -40,8 +40,6 @@
 
 @Override
 public StatusHistoryRepository getObject() throws Exception {
-nifiProperties = applicationContext.getBean("nifiProperties", 
NiFiProperties.class);

Review comment:
   The `applicationContext` attribute is no longer in use; please remove it

##
File path: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-headless-server/src/main/java/org/apache/nifi/headless/HeadlessNiFiServer.java
##
@@ -130,7 +131,11 @@ public void preDestruction() throws 
AuthorizerDestructionException {
 BulletinRepository bulletinRepository = new 
VolatileBulletinRepository();
 StandardFlowRegistryClient flowRegistryClient = new 
StandardFlowRegistryClient();
 flowRegistryClient.setProperties(props);
-StatusHistoryRepository statusHistoryRepository = new 
VolatileComponentStatusRepository();
+
+final StatusHistoryRepositoryFactoryBean 
statusHistoryRepositoryFactoryBean = new StatusHistoryRepositoryFactoryBean();

Review comment:
   As I stated previously, I firmly think that _extracting_ the relevant part 
from the factory bean into a factory (and calling that from the factory bean) 
would be the right way. With this change you introduce Spring-specific things 
directly into the headless server (which, as far as I understand, does not use 
Spring otherwise, though I might be wrong about that) and enforce Spring's API 
on this call. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (NIFI-9284) Reduce QuestDB Logging in Unit Tests

2021-10-06 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated NIFI-9284:
-
Fix Version/s: 1.15.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Reduce QuestDB Logging in Unit Tests
> 
>
> Key: NIFI-9284
> URL: https://issues.apache.org/jira/browse/NIFI-9284
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
> Fix For: 1.15.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Several test classes for QuestDB Status History in {{nifi-framework-core}} 
> produce large amounts of informational log messages during builds due to the 
> default logging configuration. The QuestDB logging configuration for tests 
> should be updated to avoid unnecessary log messages.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] pvillard31 commented on pull request #5441: NIFI-9284 Add QuestDB qlog.conf to test resources

2021-10-06 Thread GitBox


pvillard31 commented on pull request #5441:
URL: https://github.com/apache/nifi/pull/5441#issuecomment-935668114


   Merged, thanks @exceptionfactory 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (NIFI-9284) Reduce QuestDB Logging in Unit Tests

2021-10-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17424821#comment-17424821
 ] 

ASF subversion and git services commented on NIFI-9284:
---

Commit fe423263350439971a75d519f66e278d1286bcc5 in nifi's branch 
refs/heads/main from David Handermann
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=fe42326 ]

NIFI-9284 Added QuestDB qlog.conf to test resources

- Set default logging level to ERROR to avoid excessive INFO messages

Signed-off-by: Pierre Villard 

This closes #5441.


> Reduce QuestDB Logging in Unit Tests
> 
>
> Key: NIFI-9284
> URL: https://issues.apache.org/jira/browse/NIFI-9284
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Tools and Build
>Reporter: David Handermann
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Several test classes for QuestDB Status History in {{nifi-framework-core}} 
> produce large amounts of informational log messages during builds due to the 
> default logging configuration. The QuestDB logging configuration for tests 
> should be updated to avoid unnecessary log messages.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [nifi] asfgit closed pull request #5441: NIFI-9284 Add QuestDB qlog.conf to test resources

2021-10-06 Thread GitBox


asfgit closed pull request #5441:
URL: https://github.com/apache/nifi/pull/5441


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org