[GitHub] [nifi] asfgit closed pull request #4296: NIFI-7211 Added @Ignore with warning message to a test that randomly …

2020-05-26 Thread GitBox


asfgit closed pull request #4296:
URL: https://github.com/apache/nifi/pull/4296


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] KuKuDeCheng commented on pull request #4284: [NIFI-7470] - ElasticsearchHttp malformed query when using Fields

2020-05-26 Thread GitBox


KuKuDeCheng commented on pull request #4284:
URL: https://github.com/apache/nifi/pull/4284#issuecomment-633627482











[GitHub] [nifi] nielsbasjes commented on pull request #3734: NIFI-6666 Add Useragent Header to InvokeHTTP requests

2020-05-26 Thread GitBox


nielsbasjes commented on pull request #3734:
URL: https://github.com/apache/nifi/pull/3734#issuecomment-633956533











[GitHub] [nifi] MikeThomsen closed pull request #2085: NIFI-4246 - Client Credentials Grant based OAuth2 Controller Service

2020-05-26 Thread GitBox


MikeThomsen closed pull request #2085:
URL: https://github.com/apache/nifi/pull/2085


   







[GitHub] [nifi] asfgit closed pull request #4276: NIFI-7453 In PutKudu creating a new Kudu client when refreshing TGT

2020-05-26 Thread GitBox


asfgit closed pull request #4276:
URL: https://github.com/apache/nifi/pull/4276


   







[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #796: MINIFICPP-1239 Use except in gcc < 4.9

2020-05-26 Thread GitBox


arpadboda closed pull request #796:
URL: https://github.com/apache/nifi-minifi-cpp/pull/796


   







[GitHub] [nifi] MikeThomsen closed pull request #2901: NIFI-4246 - Client Credentials Grant based OAuth2 Controller Service

2020-05-26 Thread GitBox


MikeThomsen closed pull request #2901:
URL: https://github.com/apache/nifi/pull/2901


   







[GitHub] [nifi] MikeThomsen closed pull request #2541: Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory

2020-05-26 Thread GitBox


MikeThomsen closed pull request #2541:
URL: https://github.com/apache/nifi/pull/2541


   







[GitHub] [nifi] Woutifier commented on pull request #4233: NIFI-7393: add max idle time and max idle connections parameter to InvokeHTTP

2020-05-26 Thread GitBox


Woutifier commented on pull request #4233:
URL: https://github.com/apache/nifi/pull/4233#issuecomment-633860926


   > @Woutifier Other than my comment on validating that the time period is > 0 
the code looks good. Is there an easy way to test this (not necessarily a unit 
test)?
   
   Hey @jfrazee, thanks for your review. I'll add the >0 check. I'll see if I 
can come up with some way to verify that this actually does something (I did 
test it locally against one of our servers, but it would be nice to have a 
small test scenario).
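
   A minimal sketch of the "> 0" validation discussed above. The class and 
method names are hypothetical, not NiFi's actual validator API; it only 
illustrates that zero and negative idle times should be rejected:

```java
// Hypothetical stand-in for a positive-duration check on the new
// "max idle time" property; not NiFi's actual Validator interface.
public class IdleTimeValidation {
    // Returns true only for a strictly positive idle time in milliseconds.
    static boolean isValidIdleTime(long millis) {
        return millis > 0;
    }

    public static void main(String[] args) {
        assert isValidIdleTime(5000);
        assert !isValidIdleTime(0);
        assert !isValidIdleTime(-1);
        System.out.println("ok");
    }
}
```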







[GitHub] [nifi-minifi-cpp] adam-markovics opened a new pull request #795: MINIFICPP-1236 - GetFile processor's \"Input Directory\" property sho…

2020-05-26 Thread GitBox


adam-markovics opened a new pull request #795:
URL: https://github.com/apache/nifi-minifi-cpp/pull/795


   …uldn't have default value
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi] JaeGeunBang edited a comment on pull request #4279: NIFI-7461: Fix image url in documentation.

2020-05-26 Thread GitBox


JaeGeunBang edited a comment on pull request #4279:
URL: https://github.com/apache/nifi/pull/4279#issuecomment-633971568











[GitHub] [nifi] MikeThomsen commented on pull request #3734: NIFI-6666 Add Useragent Header to InvokeHTTP requests

2020-05-26 Thread GitBox


MikeThomsen commented on pull request #3734:
URL: https://github.com/apache/nifi/pull/3734#issuecomment-634282237











[GitHub] [nifi] KuKuDeCheng removed a comment on pull request #4284: [NIFI-7470] - ElasticsearchHttp malformed query when using Fields

2020-05-26 Thread GitBox


KuKuDeCheng removed a comment on pull request #4284:
URL: https://github.com/apache/nifi/pull/4284#issuecomment-633627482


   It seems that Elasticsearch deprecated the `_source_include` and 
`_source_exclude` URL parameters in favor of `_source_includes` and 
`_source_excludes` in 7.7.0. Could you make this compatible with previous 
versions and add some unit tests?
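
   The compatibility request above could be sketched as choosing the parameter 
names by Elasticsearch major version. This is an illustrative stand-in, not the 
processor's actual code, and the version cutoff used here is an assumption:

```java
import java.util.Map;

// Hypothetical sketch: pick singular vs. plural _source parameter names
// depending on the target Elasticsearch major version.
public class SourceParamCompat {
    // ES 7+ expects "_source_includes"/"_source_excludes";
    // older versions used the singular forms.
    static Map<String, String> sourceParams(int esMajorVersion,
                                            String includes, String excludes) {
        String suffix = esMajorVersion >= 7 ? "s" : "";
        return Map.of(
                "_source_include" + suffix, includes,
                "_source_exclude" + suffix, excludes);
    }

    public static void main(String[] args) {
        assert sourceParams(7, "a", "b").containsKey("_source_includes");
        assert sourceParams(6, "a", "b").containsKey("_source_include");
        System.out.println("ok");
    }
}
```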







[GitHub] [nifi] asfgit closed pull request #3822: NIFI-6785 Support Deflate Compression

2020-05-26 Thread GitBox


asfgit closed pull request #3822:
URL: https://github.com/apache/nifi/pull/3822


   







[GitHub] [nifi] adarmiento opened a new pull request #4297: NIFI-7488 Listening Port property on HandleHttpRequest is not validated when Variable registry is used

2020-05-26 Thread GitBox


adarmiento opened a new pull request #4297:
URL: https://github.com/apache/nifi/pull/4297


    Description of PR
   
   HandleHttpRequest.PORT is validated
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi] nghiaxlee opened a new pull request #188: MINIFI-521 - Update template version for MINIFI toolkit

2020-05-26 Thread GitBox


nghiaxlee opened a new pull request #188:
URL: https://github.com/apache/nifi-minifi/pull/188


   Thank you for submitting a contribution to Apache NiFi - MiNiFi.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi-minifi folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](https://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under minifi-assembly?
   - [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under minifi-assembly?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi] sjyang18 commented on a change in pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors

2020-05-26 Thread GitBox


sjyang18 commented on a change in pull request #4265:
URL: https://github.com/apache/nifi/pull/4265#discussion_r430548677



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/services/azure/storage/TestAzureStorageCredentialsControllerServiceLookup.java
##
@@ -71,28 +73,32 @@ public void testLookupServiceA() {
 final AzureStorageCredentialsDetails storageCredentialsDetails = 
lookupService.getStorageCredentialsDetails(attributes);
 assertNotNull(storageCredentialsDetails);
 assertEquals("Account_A", 
storageCredentialsDetails.getStorageAccountName());
+assertEquals("accountsuffix.core.windows.net", 
storageCredentialsDetails.getStorageSuffix());
 }
 
 @Test
 public void testLookupServiceB() {

Review comment:
   The null default is handled in the SDK, so the current behavior is not 
affected:
   
   private static String getDNS(String service, String base) {
       if (base == null) {
           base = DEFAULT_DNS;
       }
       return String.format(DNS_NAME_FORMAT, service, base);
   }
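
   A self-contained stand-in for the SDK behavior quoted above: a null base 
falls back to the default DNS suffix. The constant values here are 
illustrative, not the SDK's actual definitions:

```java
// Illustrative model of the Azure SDK's getDNS null-fallback behavior.
public class DnsFallback {
    static final String DEFAULT_DNS = "core.windows.net"; // assumed default
    static final String DNS_NAME_FORMAT = "%s.%s";        // assumed format

    static String getDNS(String service, String base) {
        if (base == null) {
            base = DEFAULT_DNS; // SDK supplies the default when unset
        }
        return String.format(DNS_NAME_FORMAT, service, base);
    }

    public static void main(String[] args) {
        assert getDNS("blob", null).equals("blob.core.windows.net");
        assert getDNS("blob", "azurestack.local").equals("blob.azurestack.local");
        System.out.println("ok");
    }
}
```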
   

##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java
##
@@ -85,6 +86,20 @@
 .sensitive(true)
 .build();
 
+public static final PropertyDescriptor ENDPOINT_SUFFIX = new 
PropertyDescriptor.Builder()
+.name("storage-endpoint-suffix")
+.displayName("Common Storage Account Endpoint Suffix")
+.description(
+"Storage accounts in public Azure always use a common FQDN 
suffix. " +
+"Override this endpoint suffix with a different suffix in 
certain circumstances (like Azure Stack or non-public Azure regions). " +
+"The preferred way is to configure them through a 
controller service specified in the Storage Credentials property. " +
+"The controller service can provide a common/shared 
configuration for multiple/all Azure processors. Furthermore, the credentials " 
+
+"can also be looked up dynamically with the 'Lookup' 
version of the service.")
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.required(false)

Review comment:
   Null default value is acceptable. The Azure Storage SDK sets the default 
endpoint suffix if we pass null. Do we still need a validator in this case? I 
verified the regression behavior without setting the endpoint suffix property.
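
   The "null default is acceptable" argument can be summarized as: the 
processor may leave the property unset because downstream code substitutes its 
own default. A hedged sketch, with an assumed default suffix value:

```java
import java.util.Optional;

// Sketch of suffix resolution: a null configured value defers to the
// downstream default (here assumed to be the public-Azure suffix).
public class EndpointSuffix {
    static final String PUBLIC_AZURE_SUFFIX = "core.windows.net"; // assumption

    static String effectiveSuffix(String configured) {
        return Optional.ofNullable(configured).orElse(PUBLIC_AZURE_SUFFIX);
    }

    public static void main(String[] args) {
        assert effectiveSuffix(null).equals("core.windows.net");
        assert effectiveSuffix("azurestack.example").equals("azurestack.example");
        System.out.println("ok");
    }
}
```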

##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java
##
@@ -85,6 +86,20 @@
 .sensitive(true)
 .build();
 
+public static final PropertyDescriptor ENDPOINT_SUFFIX = new 
PropertyDescriptor.Builder()
+.name("storage-endpoint-suffix")
+.displayName("Common Storage Account Endpoint Suffix")
+.description(
+"Storage accounts in public Azure always use a common FQDN 
suffix. " +
+"Override this endpoint suffix with a different suffix in 
certain circumstances (like Azure Stack or non-public Azure regions). " +
+"The preferred way is to configure them through a 
controller service specified in the Storage Credentials property. " +
+"The controller service can provide a common/shared 
configuration for multiple/all Azure processors. Furthermore, the credentials " 
+
+"can also be looked up dynamically with the 'Lookup' 
version of the service.")
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.required(false)

Review comment:
   In order to support Azure Stack on-premises, this should be an editable 
free-form field. 

##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java
##
@@ -85,6 +86,20 @@
 .sensitive(true)
 .build();
 
+public static final PropertyDescriptor ENDPOINT_SUFFIX = new 
PropertyDescriptor.Builder()
+.name("storage-endpoint-suffix")
+.displayName("Common Storage Account Endpoint Suffix")
+.description(
+"Storage accounts in public Azure always use a common FQDN 
suffix. " +
+"Override this endpoint suffix with a different suffix in 
certain circumstances (like Azure Stack or non-public Azure regions). " +
+"The preferred way is to configure them through a 
controller service specified in the Storage Credentials property. " +
+"The controller service can provide a common/shared 
configuration for multiple/all Azure processors. 

[GitHub] [nifi] turcsanyip commented on a change in pull request #4287: NIFI-7445: Add Conflict Resolution property to PutAzureDataLakeStorage processor

2020-05-26 Thread GitBox


turcsanyip commented on a change in pull request #4287:
URL: https://github.com/apache/nifi/pull/4287#discussion_r430546794



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/PutAzureDataLakeStorage.java
##
@@ -76,27 +109,45 @@ public void onTrigger(final ProcessContext context, final 
ProcessSession session
 final DataLakeServiceClient storageClient = 
getStorageClient(context, flowFile);
 final DataLakeFileSystemClient fileSystemClient = 
storageClient.getFileSystemClient(fileSystem);
 final DataLakeDirectoryClient directoryClient = 
fileSystemClient.getDirectoryClient(directory);
-final DataLakeFileClient fileClient = 
directoryClient.createFile(fileName);
+final DataLakeFileClient fileClient;
+
+final String conflictResolution = 
context.getProperty(CONFLICT_RESOLUTION).getValue();
+boolean overwrite = conflictResolution.equals(REPLACE_RESOLUTION);
+
+try {
+fileClient = directoryClient.createFile(fileName, overwrite);
+
+final long length = flowFile.getSize();
+if (length > 0) {
+try (final InputStream rawIn = session.read(flowFile); 
final BufferedInputStream in = new BufferedInputStream(rawIn)) {
+fileClient.append(in, 0, length);
+}
+}
+fileClient.flush(length);
+
+final Map<String, String> attributes = new HashMap<>();
+attributes.put("azure.filesystem", fileSystem);
+attributes.put("azure.directory", directory);
+attributes.put("azure.filename", fileName);
+attributes.put("azure.primaryUri", fileClient.getFileUrl());
+attributes.put("azure.length", String.valueOf(length));
+flowFile = session.putAllAttributes(flowFile, attributes);
 
-final long length = flowFile.getSize();
-if (length > 0) {
-try (final InputStream rawIn = session.read(flowFile); final 
BufferedInputStream in = new BufferedInputStream(rawIn)) {
-fileClient.append(in, 0, length);
+session.transfer(flowFile, REL_SUCCESS);
+final long transferMillis = 
TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
+session.getProvenanceReporter().send(flowFile, 
fileClient.getFileUrl(), transferMillis);
+} catch (DataLakeStorageException dlsException) {
+if (dlsException.getStatusCode() == 409) {
+if (conflictResolution.equals(IGNORE_RESOLUTION)) {
+session.transfer(flowFile, REL_SUCCESS);
+getLogger().warn("Transferring {} to success because 
file with same name already exists", new Object[]{flowFile});

Review comment:
   The warning message does not properly describe the cause and the effect: 
file exists => transfer to success.
   The actual reason for transferring to success is the 'Ignore' resolution 
policy.
   It should also be mentioned that the file has not been overwritten in Azure.
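
   A sketch of the clearer wording the review asks for: name the 'Ignore' 
policy as the reason and state that the existing file was left untouched. The 
message text is a suggestion, not the PR's actual log line:

```java
// Hypothetical helper producing the improved warning message suggested
// in the review for the 409-conflict / 'ignore' resolution branch.
public class ConflictMessage {
    static String ignoreMessage(String fileName) {
        return String.format(
                "Transferring %s to success: a file with the same name already "
                + "exists and Conflict Resolution is 'ignore'; the existing "
                + "file in Azure was not overwritten.",
                fileName);
    }

    public static void main(String[] args) {
        String msg = ignoreMessage("data.csv");
        assert msg.contains("ignore");
        assert msg.contains("not overwritten");
        System.out.println(msg);
    }
}
```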
   

##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/processors/azure/storage/ITPutAzureDataLakeStorage.java
##
@@ -253,6 +300,14 @@ private void assertFlowFile(String directory, String 
fileName, byte[] fileData)
 flowFile.assertAttributeEquals("azure.length", 
Integer.toString(fileData.length));
 }
 
+private void assertSimpleFlowFile(byte[] fileData) throws Exception {

Review comment:
   It could be called from `assertFlowFile`, because the first section of 
that method is the same.









[GitHub] [nifi] davidvoit commented on a change in pull request #4035: NIFI-7097: ResultSetRecordSet: Always use RecordField from readerSchema if applicable.

2020-05-26 Thread GitBox


davidvoit commented on a change in pull request #4035:
URL: https://github.com/apache/nifi/pull/4035#discussion_r430248126



##
File path: 
nifi-commons/nifi-record/src/test/java/org/apache/nifi/serialization/record/TestResultSetRecordSet.java
##
@@ -0,0 +1,168 @@
+package org.apache.nifi.serialization.record;

Review comment:
   No code before the License header

##
File path: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/ResultSetRecordSet.java
##
@@ -153,25 +153,32 @@ private static RecordSchema createSchema(final ResultSet 
rs, final RecordSchema
 final int column = i + 1;
 final int sqlType = metadata.getColumnType(column);
 
-final DataType dataType = getDataType(sqlType, rs, column, 
readerSchema);
 final String fieldName = metadata.getColumnLabel(column);
+Optional<RecordField> readerField = readerSchema == null ? 
Optional.empty() : readerSchema.getField(fieldName);

Review comment:
   I'm not too sure if the readerField Optinoal should be inlined in the if 
block, as it is only used in the ture case. But as it checks readerSchema 
available and field available this could be fine.
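
   The lookup pattern under discussion can be shown with simplified 
stand-ins for NiFi's `RecordSchema`/`RecordField` types (a plain map here, 
which is an assumption made for the sketch): consult the reader schema for a 
field by name, yielding empty when the schema is absent or the field is not 
defined.

```java
import java.util.Map;
import java.util.Optional;

// Simplified model of the null-safe reader-schema field lookup.
public class ReaderFieldLookup {
    static Optional<String> readerFieldType(Map<String, String> readerSchema,
                                            String fieldName) {
        return readerSchema == null
                ? Optional.empty()
                : Optional.ofNullable(readerSchema.get(fieldName));
    }

    public static void main(String[] args) {
        Map<String, String> schema = Map.of("id", "INT");
        assert readerFieldType(schema, "id").isPresent();
        assert !readerFieldType(schema, "name").isPresent();
        assert !readerFieldType(null, "id").isPresent();
        System.out.println("ok");
    }
}
```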









[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #789: MINIFICPP-1203 - Enable header linting in include directories and resolve linter recommendations

2020-05-26 Thread GitBox


arpadboda commented on a change in pull request #789:
URL: https://github.com/apache/nifi-minifi-cpp/pull/789#discussion_r430369684



##
File path: extensions/standard-processors/CPPLINT.cfg
##
@@ -1,2 +0,0 @@
-filter=-build/include_alpha
-

Review comment:
   Do we need the empty file here? 

##
File path: generateVersion.sh
##
@@ -37,8 +37,25 @@ IFS=';' read -r -a extensions_array <<< "$extensions"
 extension_list="${extension_list} } "
 
 cat >"$out_dir/agent_version.h" < logger_;
 };
 
-}
+}  // namespace core

Review comment:
   Inconsistent comment format, especially by leaving the previous one 
below. 

##
File path: libminifi/include/c2/ControllerSocketProtocol.h
##
@@ -79,16 +81,15 @@ class ControllerSocketProtocol : public HeartBeatReporter {
   std::shared_ptr stream_factory_;
 
  private:
-
   std::shared_ptr logger_;
 };
 
 REGISTER_RESOURCE(ControllerSocketProtocol, "Creates a reporter that can 
handle basic c2 operations for a localized environment through a simple TCP 
socket.");
 
-} /* namesapce c2 */
+}  // namespace c2

Review comment:
   Inconsistent comment format

##
File path: libminifi/include/c2/PayloadParser.h
##
@@ -186,10 +185,10 @@ class PayloadParser {
   std::string component_to_get_;
 };
 
-} /* namesapce c2 */
+}  // namespace c2

Review comment:
   Inconsistent comment format

##
File path: libminifi/include/c2/protocols/RESTProtocol.h
##
@@ -100,10 +98,10 @@ class RESTProtocol {
   std::map nested_payloads_;
 };
 
-} /* namesapce c2 */
+}  // namespace c2

Review comment:
   Inconsistent comment format

##
File path: libminifi/include/core/ContentRepository.h
##
@@ -104,12 +106,11 @@ class ContentRepository : public 
StreamManager {
 if (count != count_map_.end() && count->second > 0) {
   count_map_[str] = count->second - 1;
 } else {
-   count_map_.erase(str);
+  count_map_.erase(str);

Review comment:
   Changing the tab to spaces is good, but please change to the correct 
amount. 

##
File path: libminifi/include/c2/C2Protocol.h
##
@@ -102,18 +101,17 @@ class C2Protocol : public core::Connectable {
   }
 
  protected:
-
   std::atomic running_;
 
   std::shared_ptr controller_;
 
   std::shared_ptr configuration_;
 };
 
-} /* namesapce c2 */
+}  // namespace c2

Review comment:
   Same inconsistency here. 

##
File path: libminifi/include/c2/C2Payload.h
##
@@ -189,10 +191,10 @@ class C2Payload : public state::Update {
   bool is_collapsible_{ true };
 };
 
-} /* namesapce c2 */
+}  // namespace c2

Review comment:
   What's the motivation behind this change?
   This just looks more inconsistent now. 

##
File path: extensions/standard-processors/processors/ListenSyslog.h
##
@@ -200,20 +207,20 @@ class ListenSyslog : public core::Processor {
   int64_t _maxBatchSize;
   std::string _messageDelimiter;
   std::string _protocol;
-  int64_t _port;bool _parseMessages;
+  int64_t _port; bool _parseMessages;

Review comment:
   That's true, but nah, in case we touch it, let's make it nice, not just 
"pass linter"!

##
File path: libminifi/include/c2/HeartBeatReporter.h
##
@@ -89,18 +90,17 @@ class HeartBeatReporter : public core::Connectable {
   }
 
  protected:
-
   std::shared_ptr controller_;
 
   std::shared_ptr update_sink_;
 
   std::shared_ptr configuration_;
 };
 
-} /* namesapce c2 */
+}  // namespace c2

Review comment:
   Inconsistent comment format

##
File path: libminifi/include/c2/C2Trigger.h
##
@@ -73,10 +74,10 @@ class C2Trigger : public core::Connectable{
   virtual C2Payload getAction() = 0;
 };
 
-} /* namesapce c2 */
+}  // namespace c2

Review comment:
   Inconsistent comment format

##
File path: libminifi/include/c2/protocols/RESTProtocol.h
##
@@ -18,28 +18,29 @@
 #ifndef LIBMINIFI_INCLUDE_C2_PROTOCOLS_RESTPROTOCOL_H_
 #define LIBMINIFI_INCLUDE_C2_PROTOCOLS_RESTPROTOCOL_H_
 
-#include 
+#include  // NOLINT
+#include  // NOLINT
 
 #ifdef RAPIDJSON_ASSERT
 #undef RAPIDJSON_ASSERT
 #endif
 #define RAPIDJSON_ASSERT(x) if(!(x)) throw std::logic_error("rapidjson 
exception"); //NOLINT
 
+#include  // NOLINT
+#include  // NOLINT

Review comment:
   What kind of linter error would it generate?
   Why do we have the rapidjson assert defined in between std includes?

##
File path: libminifi/include/controllers/SSLContextService.h
##
@@ -17,9 +17,11 @@
  */
 #ifndef LIBMINIFI_INCLUDE_CONTROLLERS_SSLCONTEXTSERVICE_H_
 #define LIBMINIFI_INCLUDE_CONTROLLERS_SSLCONTEXTSERVICE_H_
+
+#include 

Review comment:
   This should be together with the other std includes (iostream, memory)

##
File path: libminifi/include/core/ProcessSession.h
##
@@ -49,12 +51,12 @@ class ProcessSession : public ReferenceContainer {
   /*!
* Create a new process session
*/
-  ProcessSession(st

[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #794: MINIFICPP-1232 - PublishKafka processor doesn't validate some properties

2020-05-26 Thread GitBox


hunyadi-dev commented on a change in pull request #794:
URL: https://github.com/apache/nifi-minifi-cpp/pull/794#discussion_r430360298



##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -82,10 +82,33 @@ core::Property PublishKafka::TargetBatchPayloadSize(
 core::PropertyBuilder::createProperty("Target Batch Payload 
Size")->withDescription("The target total payload size for a batch. 0 B means 
unlimited (Batch Size is still applied).")
 ->isRequired(false)->withDefaultValue("512 
KB")->build());
 core::Property PublishKafka::AttributeNameRegex("Attributes to Send as 
Headers", "Any attribute whose name matches the regex will be added to the 
Kafka messages as a Header", "");
-core::Property PublishKafka::QueueBufferMaxTime("Queue Buffering Max Time", 
"Delay to wait for messages in the producer queue to accumulate before 
constructing message batches", "");
-core::Property PublishKafka::QueueBufferMaxSize("Queue Max Buffer Size", 
"Maximum total message size sum allowed on the producer queue", "");
-core::Property PublishKafka::QueueBufferMaxMessage("Queue Max Message", 
"Maximum number of messages allowed on the producer queue", "");
-core::Property PublishKafka::CompressCodec("Compress Codec", "compression 
codec to use for compressing message sets", COMPRESSION_CODEC_NONE);
+
+const core::Property PublishKafka::QueueBufferMaxTime(

Review comment:
   I like it this way, so 2/2 :)

##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -82,10 +82,33 @@ core::Property PublishKafka::TargetBatchPayloadSize(
 core::PropertyBuilder::createProperty("Target Batch Payload 
Size")->withDescription("The target total payload size for a batch. 0 B means 
unlimited (Batch Size is still applied).")
 ->isRequired(false)->withDefaultValue("512 
KB")->build());
 core::Property PublishKafka::AttributeNameRegex("Attributes to Send as 
Headers", "Any attribute whose name matches the regex will be added to the 
Kafka messages as a Header", "");
-core::Property PublishKafka::QueueBufferMaxTime("Queue Buffering Max Time", 
"Delay to wait for messages in the producer queue to accumulate before 
constructing message batches", "");
-core::Property PublishKafka::QueueBufferMaxSize("Queue Max Buffer Size", 
"Maximum total message size sum allowed on the producer queue", "");
-core::Property PublishKafka::QueueBufferMaxMessage("Queue Max Message", 
"Maximum number of messages allowed on the producer queue", "");
-core::Property PublishKafka::CompressCodec("Compress Codec", "compression 
codec to use for compressing message sets", COMPRESSION_CODEC_NONE);
+
+const core::Property PublishKafka::QueueBufferMaxTime(

Review comment:
   I like it this way too, so 2/2 :)

##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -82,10 +82,33 @@ core::Property PublishKafka::TargetBatchPayloadSize(
 core::PropertyBuilder::createProperty("Target Batch Payload 
Size")->withDescription("The target total payload size for a batch. 0 B means 
unlimited (Batch Size is still applied).")
 ->isRequired(false)->withDefaultValue("512 
KB")->build());
 core::Property PublishKafka::AttributeNameRegex("Attributes to Send as 
Headers", "Any attribute whose name matches the regex will be added to the 
Kafka messages as a Header", "");
-core::Property PublishKafka::QueueBufferMaxTime("Queue Buffering Max Time", 
"Delay to wait for messages in the producer queue to accumulate before 
constructing message batches", "");
-core::Property PublishKafka::QueueBufferMaxSize("Queue Max Buffer Size", 
"Maximum total message size sum allowed on the producer queue", "");
-core::Property PublishKafka::QueueBufferMaxMessage("Queue Max Message", 
"Maximum number of messages allowed on the producer queue", "");
-core::Property PublishKafka::CompressCodec("Compress Codec", "compression 
codec to use for compressing message sets", COMPRESSION_CODEC_NONE);
+
+const core::Property PublishKafka::QueueBufferMaxTime(

Review comment:
   I like it this way too, so 2/2. Arpad, are you sure this is not accepted 
by the linter? I see no warnings when trying to add these lines to a linted 
file.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [nifi] JaeGeunBang commented on pull request #4279: NIFI-7461: Fix image url in documentation.

2020-05-26 Thread GitBox


JaeGeunBang commented on pull request #4279:
URL: https://github.com/apache/nifi/pull/4279#issuecomment-633971568


   It happened when I looked at the documentation in git.
   In the link below, the actual image is not visible, and if you click on the 
image, "Page Not Found" appears.
   
https://github.com/apache/nifi/blob/master/nifi-docs/src/main/asciidoc/overview.adoc
   







[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support

2020-05-26 Thread GitBox


hunyadi-dev commented on a change in pull request #784:
URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r430365464



##
File path: extensions/script/python/ExecutePythonProcessor.cpp
##
@@ -46,144 +50,177 @@ core::Relationship 
ExecutePythonProcessor::Failure("failure", "Script failures")
 void ExecutePythonProcessor::initialize() {
   // initialization requires that we do a little leg work prior to onSchedule
   // so that we can provide manifest our processor identity
-  std::set properties;
-
-  std::string prop;
-  getProperty(ScriptFile.getName(), prop);
-
-  properties.insert(ScriptFile);
-  properties.insert(ModuleDirectory);
-  setSupportedProperties(properties);
-
-  std::set relationships;
-  relationships.insert(Success);
-  relationships.insert(Failure);
-  setSupportedRelationships(std::move(relationships));
-  setAcceptAllProperties();
-  if (!prop.empty()) {
-setProperty(ScriptFile, prop);
-std::shared_ptr engine;
-python_logger_ = 
logging::LoggerFactory::getAliasedLogger(getName());
+  if (getProperties().empty()) {
+setSupportedProperties({
+  ScriptFile,
+  ScriptBody,
+  ModuleDirectory
+});
+setAcceptAllProperties();
+setSupportedRelationships({
+  Success,
+  Failure
+});
+valid_init_ = false;
+return;
+  }
 
-engine = createEngine();
+  python_logger_ = 
logging::LoggerFactory::getAliasedLogger(getName());
 
-if (engine == nullptr) {
-  throw std::runtime_error("No script engine available");
-}
+  getProperty(ModuleDirectory.getName(), module_directory_);
 
-try {
-  engine->evalFile(prop);
-  auto me = shared_from_this();
-  triggerDescribe(engine, me);
-  triggerInitialize(engine, me);
+  valid_init_ = false;
+  appendPathForImportModules();
+  loadScript();
+  try {
+if ("" != script_to_exec_) {

Review comment:
   I think this is more expressive, as it shows we are handling an `std::string` as opposed to `std::vector`. Also, even on reading this comment I missed the bang on the first option, and `not` is warned about (unreliably, though) by the linter.

##
File path: extensions/script/python/ExecutePythonProcessor.cpp
##
@@ -46,144 +50,177 @@ core::Relationship 
ExecutePythonProcessor::Failure("failure", "Script failures")
 void ExecutePythonProcessor::initialize() {
   // initialization requires that we do a little leg work prior to onSchedule
   // so that we can provide manifest our processor identity
-  std::set properties;
-
-  std::string prop;
-  getProperty(ScriptFile.getName(), prop);
-
-  properties.insert(ScriptFile);
-  properties.insert(ModuleDirectory);
-  setSupportedProperties(properties);
-
-  std::set relationships;
-  relationships.insert(Success);
-  relationships.insert(Failure);
-  setSupportedRelationships(std::move(relationships));
-  setAcceptAllProperties();
-  if (!prop.empty()) {
-setProperty(ScriptFile, prop);
-std::shared_ptr engine;
-python_logger_ = 
logging::LoggerFactory::getAliasedLogger(getName());
+  if (getProperties().empty()) {
+setSupportedProperties({
+  ScriptFile,
+  ScriptBody,
+  ModuleDirectory
+});
+setAcceptAllProperties();
+setSupportedRelationships({
+  Success,
+  Failure
+});
+valid_init_ = false;
+return;
+  }
 
-engine = createEngine();
+  python_logger_ = 
logging::LoggerFactory::getAliasedLogger(getName());
 
-if (engine == nullptr) {
-  throw std::runtime_error("No script engine available");
-}
+  getProperty(ModuleDirectory.getName(), module_directory_);
 
-try {
-  engine->evalFile(prop);
-  auto me = shared_from_this();
-  triggerDescribe(engine, me);
-  triggerInitialize(engine, me);
+  valid_init_ = false;
+  appendPathForImportModules();
+  loadScript();
+  try {
+if ("" != script_to_exec_) {
+  std::shared_ptr engine = getScriptEngine();
+  engine->eval(script_to_exec_);
+  auto shared_this = shared_from_this();
+  engine->describe(shared_this);
+  engine->onInitialize(shared_this);
+  handleEngineNoLongerInUse(std::move(engine));
   valid_init_ = true;
-} catch (std::exception &exception) {
-  logger_->log_error("Caught Exception %s", exception.what());
-  engine = nullptr;
-  std::rethrow_exception(std::current_exception());
-  valid_init_ = false;
-} catch (...) {
-  logger_->log_error("Caught Exception");
-  engine = nullptr;
-  std::rethrow_exception(std::current_exception());
-  valid_init_ = false;
 }
-
+  }
+  catch (const std::exception& exception) {
+logger_->log_error("Caught Exception: %s", exception.what());
+std::rethrow_exception(std::current_exception());
+  }
+  catch (...) {
+logger_->log_error("Caught Exception");
+std::rethrow_exception(std::current_exception());
   }
 }
 
 void ExecutePythonProcessor::onSchedule(const 
std::shared_ptr &context, const 
std::s

[GitHub] [nifi] MikeThomsen commented on pull request #2085: NIFI-4246 - Client Credentials Grant based OAuth2 Controller Service

2020-05-26 Thread GitBox


MikeThomsen commented on pull request #2085:
URL: https://github.com/apache/nifi/pull/2085#issuecomment-633742354











[GitHub] [nifi] adarmiento commented on a change in pull request #4297: NIFI-7488 Listening Port property on HandleHttpRequest is not validated when Variable registry is used

2020-05-26 Thread GitBox


adarmiento commented on a change in pull request #4297:
URL: https://github.com/apache/nifi/pull/4297#discussion_r430201564



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/HandleHttpRequest.java
##
@@ -321,6 +326,24 @@
 return Collections.singleton(REL_SUCCESS);
 }
 
+@Override
+protected Collection customValidate(final 
ValidationContext validationContext) {
+final List results = new ArrayList<>();
+
+final Long port = 
validationContext.getProperty(PORT).evaluateAttributeExpressions().asLong();

Review comment:
   I made a unit test using invalid values (passed in from the registry); calling `.assertValid()` was not marking the processor as invalid.
   









[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #795: MINIFICPP-1236 - GetFile processor's \"Input Directory\" property sho…

2020-05-26 Thread GitBox


arpadboda commented on a change in pull request #795:
URL: https://github.com/apache/nifi-minifi-cpp/pull/795#discussion_r430358167



##
File path: extensions/standard-processors/processors/GetFile.cpp
##
@@ -56,7 +56,7 @@ core::Property GetFile::BatchSize(
 core::PropertyBuilder::createProperty("Batch Size")->withDescription("The 
maximum number of files to pull in each 
iteration")->withDefaultValue(10)->build());
 
 core::Property GetFile::Directory(
-core::PropertyBuilder::createProperty("Input 
Directory")->withDescription("The input directory from which to pull 
files")->isRequired(true)->supportsExpressionLanguage(true)->withDefaultValue(".")
+core::PropertyBuilder::createProperty("Input 
Directory")->withDescription("The input directory from which to pull 
files")->isRequired(true)->supportsExpressionLanguage(true)

Review comment:
   Could you add a testcase that covers this? (The processor doesn't start without setting the input dir)

##
File path: extensions/standard-processors/processors/GetFile.cpp
##
@@ -56,7 +56,7 @@ core::Property GetFile::BatchSize(
 core::PropertyBuilder::createProperty("Batch Size")->withDescription("The 
maximum number of files to pull in each 
iteration")->withDefaultValue(10)->build());
 
 core::Property GetFile::Directory(
-core::PropertyBuilder::createProperty("Input 
Directory")->withDescription("The input directory from which to pull 
files")->isRequired(true)->supportsExpressionLanguage(true)->withDefaultValue(".")
+core::PropertyBuilder::createProperty("Input 
Directory")->withDescription("The input directory from which to pull 
files")->isRequired(true)->supportsExpressionLanguage(true)

Review comment:
   Could you add a testcase that covers this? (The processor doesn't start 
without setting the input dir)









[GitHub] [nifi] Xyrodileas commented on pull request #2901: NIFI-4246 - Client Credentials Grant based OAuth2 Controller Service

2020-05-26 Thread GitBox


Xyrodileas commented on pull request #2901:
URL: https://github.com/apache/nifi/pull/2901#issuecomment-633983222


   What is the security reason for not integrating this PR? What could be done to integrate this feature?







[GitHub] [nifi] pgyori commented on pull request #4287: NIFI-7445: Add Conflict Resolution property to PutAzureDataLakeStorage processor

2020-05-26 Thread GitBox


pgyori commented on pull request #4287:
URL: https://github.com/apache/nifi/pull/4287#issuecomment-634140865


   Thank you @turcsanyip ! Fixed your findings in the next commit.







[GitHub] [nifi-minifi] nghiaxlee opened a new pull request #187: MINIFI-492 - Generic reporting tasks

2020-05-26 Thread GitBox


nghiaxlee opened a new pull request #187:
URL: https://github.com/apache/nifi-minifi/pull/187


   Thank you for submitting a contribution to Apache NiFi - MiNiFi.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [ ] Does your PR title start with MINIFI- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi-minifi folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](https://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under minifi-assembly?
   - [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under minifi-assembly?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #789: MINIFICPP-1203 - Enable header linting in include directories and resolve linter recommendations

2020-05-26 Thread GitBox


hunyadi-dev commented on a change in pull request #789:
URL: https://github.com/apache/nifi-minifi-cpp/pull/789#discussion_r430257387



##
File path: extensions/standard-processors/processors/GenerateFlowFile.h
##
@@ -17,8 +17,16 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-#ifndef __GENERATE_FLOW_FILE_H__
-#define __GENERATE_FLOW_FILE_H__
+#ifndef EXTENSIONS_STANDARD_PROCESSORS_PROCESSORS_GENERATEFLOWFILE_H_
+#define EXTENSIONS_STANDARD_PROCESSORS_PROCESSORS_GENERATEFLOWFILE_H_
+
+#include 
+
+#include 
+
+#include 
+
+#include 

Review comment:
   Yes, I did not consider this when writing the script adding missing 
headers. Will write a new one that corrects for these cases.

##
File path: extensions/standard-processors/processors/ListenSyslog.h
##
@@ -200,20 +207,20 @@ class ListenSyslog : public core::Processor {
   int64_t _maxBatchSize;
   std::string _messageDelimiter;
   std::string _protocol;
-  int64_t _port;bool _parseMessages;
+  int64_t _port; bool _parseMessages;

Review comment:
   I agree, but this is not related to linter recommendations.

##
File path: extensions/standard-processors/processors/GenerateFlowFile.h
##
@@ -17,8 +17,16 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-#ifndef __GENERATE_FLOW_FILE_H__
-#define __GENERATE_FLOW_FILE_H__
+#ifndef EXTENSIONS_STANDARD_PROCESSORS_PROCESSORS_GENERATEFLOWFILE_H_
+#define EXTENSIONS_STANDARD_PROCESSORS_PROCESSORS_GENERATEFLOWFILE_H_
+
+#include 
+
+#include 
+
+#include 
+
+#include 

Review comment:
   Added a commit fixing these kinds of errors.

##
File path: extensions/standard-processors/CPPLINT.cfg
##
@@ -1,2 +0,0 @@
-filter=-build/include_alpha
-

Review comment:
   It should be shown as deleted :S
   https://user-images.githubusercontent.com/64011968/82909891-28ea1280-9f6a-11ea-87b6-a6ffc0f24c85.png

##
File path: generateVersion.sh
##
@@ -37,8 +37,25 @@ IFS=';' read -r -a extensions_array <<< "$extensions"
 extension_list="${extension_list} } "
 
 cat >"$out_dir/agent_version.h" < running_;
 
   std::shared_ptr controller_;
 
   std::shared_ptr configuration_;
 };
 
-} /* namesapce c2 */
+}  // namespace c2

Review comment:
   See comment above.

##
File path: libminifi/include/core/Processor.h
##
@@ -312,11 +308,11 @@ class Processor : public Connectable, public 
ConfigurableComponent, public std::
   std::shared_ptr logger_;
 };
 
-}
+}  // namespace core

Review comment:
   See comment above.

##
File path: libminifi/include/core/state/Value.h
##
@@ -501,11 +496,11 @@ struct SerializedResponseNode {
   SerializedResponseNode &operator=(const SerializedResponseNode &other) = 
default;
 };
 
-} /* namespace metrics */
+}  // namespace response

Review comment:
   See comment above.

##
File path: libminifi/include/c2/C2Payload.h
##
@@ -189,10 +191,10 @@ class C2Payload : public state::Update {
   bool is_collapsible_{ true };
 };
 
-} /* namesapce c2 */
+}  // namespace c2

Review comment:
   Updated.

##
File path: libminifi/include/c2/protocols/RESTProtocol.h
##
@@ -18,28 +18,29 @@
 #ifndef LIBMINIFI_INCLUDE_C2_PROTOCOLS_RESTPROTOCOL_H_
 #define LIBMINIFI_INCLUDE_C2_PROTOCOLS_RESTPROTOCOL_H_
 
-#include 
+#include  // NOLINT
+#include  // NOLINT
 
 #ifdef RAPIDJSON_ASSERT
 #undef RAPIDJSON_ASSERT
 #endif
 #define RAPIDJSON_ASSERT(x) if(!(x)) throw std::logic_error("rapidjson 
exception"); //NOLINT
 
+#include  // NOLINT
+#include  // NOLINT

Review comment:
   I don't know, but it fails on Windows otherwise.

##
File path: libminifi/include/c2/protocols/RESTProtocol.h
##
@@ -18,28 +18,29 @@
 #ifndef LIBMINIFI_INCLUDE_C2_PROTOCOLS_RESTPROTOCOL_H_
 #define LIBMINIFI_INCLUDE_C2_PROTOCOLS_RESTPROTOCOL_H_
 
-#include 
+#include  // NOLINT
+#include  // NOLINT
 
 #ifdef RAPIDJSON_ASSERT
 #undef RAPIDJSON_ASSERT
 #endif
 #define RAPIDJSON_ASSERT(x) if(!(x)) throw std::logic_error("rapidjson 
exception"); //NOLINT
 
+#include  // NOLINT
+#include  // NOLINT

Review comment:
   I don't know, but it fails on Windows otherwise. This order is the result of careful threading and experimentation. As I don't enjoy working on Windows-specific stuff, I leave the joy of figuring this out to others :D

##
File path: libminifi/include/processors/ProcessorUtils.h
##
@@ -1,9 +1,28 @@
-#include 
-#include 
-#include 
+/**

Review comment:
   Resolved by adding reaction to comment.

##
File path: libminifi/include/controllers/SSLContextService.h
##
@@ -17,9 +17,11 @@
  */
 #ifndef LIBMINIFI_INCLUDE_CONTROLLERS_SSLCONTEXTSERVICE_H_
 #define LIBMINIFI_INCLUDE_CONTROLLERS_SSLCONTEXTSERVICE_H_
+
+#include 

Review comment:
   Moved there.

##
File path: libmin

[GitHub] [nifi] granthenke commented on pull request #4276: NIFI-7453 In PutKudu creating a new Kudu client when refreshing TGT

2020-05-26 Thread GitBox


granthenke commented on pull request #4276:
URL: https://github.com/apache/nifi/pull/4276#issuecomment-634097614


   +1 LGTM. I also checked out the PR and ran `mvn verify -Pintegration-tests` 
to validate the integration test still passes. 







[GitHub] [nifi] jdye64 commented on pull request #2085: NIFI-4246 - Client Credentials Grant based OAuth2 Controller Service

2020-05-26 Thread GitBox


jdye64 commented on pull request #2085:
URL: https://github.com/apache/nifi/pull/2085#issuecomment-633743619











[GitHub] [nifi] MikeThomsen commented on pull request #2901: NIFI-4246 - Client Credentials Grant based OAuth2 Controller Service

2020-05-26 Thread GitBox


MikeThomsen commented on pull request #2901:
URL: https://github.com/apache/nifi/pull/2901#issuecomment-633731476


   It was really a good PR, but unfortunately Oltu is in the Apache Attic, so I don't think we can safely use it at this point for security reasons. Going to close it now because of that. If any other committer or PMC member disagrees, feel free to reopen.







[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #792: MINIFICPP-1230 - Enable on Win and refactor MergeFileTests

2020-05-26 Thread GitBox


arpadboda closed pull request #792:
URL: https://github.com/apache/nifi-minifi-cpp/pull/792


   







[GitHub] [nifi-minifi-cpp] fgerlits opened a new pull request #796: MINIFICPP-1239 Use except in gcc < 4.9

2020-05-26 Thread GitBox


fgerlits opened a new pull request #796:
URL: https://github.com/apache/nifi-minifi-cpp/pull/796


   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   ### Description of PR
   
   We should use the regular expression implementation from the STL whenever we 
can, which means for all compilers except gcc versions before 4.9.
   
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [x] Does your PR title start with MINIFICPP- where  is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?
   
   - [x] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi] davidvoit commented on pull request #4036: NIFI-7099: AbstractDatabaseLookupService: Add new option to use a Select Statement alternatively to table name and key column

2020-05-26 Thread GitBox


davidvoit commented on pull request #4036:
URL: https://github.com/apache/nifi/pull/4036#issuecomment-633897227


   Instead of patching SimpleDatabaseLookupService, maybe create a dedicated CustomSQLLookupService?







[GitHub] [nifi-minifi-cpp] phrocker commented on pull request #781: MINIFICPP-1214: Converts H2O Processors to use ALv2 compliant H20-3 library

2020-05-26 Thread GitBox


phrocker commented on pull request #781:
URL: https://github.com/apache/nifi-minifi-cpp/pull/781#issuecomment-633966448


   > 
   > 
   > @phrocker @szaszm I created a Jira Ticket for creating regression tests 
for the new H2O Processors using pytest-mock framework: 
https://issues.apache.org/jira/browse/MINIFICPP-1233
   > 
   > I will start working on regression tests for the following two processors:
   > 
   > * **ExecuteH2oMojoScoring.py**
   > 
   > * **ConvertDsToCsv.py**
   > 
   > 
   > How should I approach integration tests for 
[#784](https://github.com/apache/nifi-minifi-cpp/pull/784) and these two new 
H2O Processors? It sounds like @phrocker already tested integration?
   
   James, I tested the processors manually. In regard to testing these processors, I'd focus on validating your logic and, if there are any external services, mock those out. I'll take a look at @szaszm's comments in the ticket.
   
If you can create some tests we can create a way to run them. 







[GitHub] [nifi] tpalfy commented on pull request #4276: NIFI-7453 In PutKudu creating a new Kudu client when refreshing TGT

2020-05-26 Thread GitBox


tpalfy commented on pull request #4276:
URL: https://github.com/apache/nifi/pull/4276#issuecomment-633622977


   @granthenke @turcsanyip 
   Thanks for the feedback!
   I added a change for safely closing the old client when creating a new one.







[GitHub] [nifi] jfrazee commented on a change in pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors

2020-05-26 Thread GitBox


jfrazee commented on a change in pull request #4265:
URL: https://github.com/apache/nifi/pull/4265#discussion_r430022873



##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java
##
@@ -85,6 +85,22 @@
 .sensitive(true)
 .build();
 
+public static final PropertyDescriptor ENDPOINT_SUFFIX = new 
PropertyDescriptor.Builder()
+.name("storage-endpoint-suffix")
+.displayName("Storage Endpoint Suffix")
+.description(
+"Storage accounts in public Azure always use a common FQDN 
suffix. " +
+"Override this endpoint suffix with a different suffix in 
certain circumsances (like Azure Stack or non-public Azure regions). " +
+"The preferred way is to configure them through a 
controller service specified in the Storage Credentials property. " +
+"The controller service can provide a common/shared 
configuration for multiple/all Azure processors. Furthermore, the credentials " 
+
+"can also be looked up dynamically with the 'Lookup' 
version of the service.")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)

Review comment:
   I think the precedent for these is to use a dropdown 
(`allowableValues()`) and there is a list of these in 
[`client-runtime`](https://github.com/Azure/autorest-clientruntime-for-java/blob/23e3142db1f7fbcd8a871ed791c1a806285ee81c/azure-client-runtime/src/main/java/com/microsoft/azure/AzureEnvironment.java#L144),
 which we already pull in, but it doesn't include Azure Stack so we'd need to 
add that. Are there any (un)expected drawbacks to doing this?

##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java
##
@@ -85,6 +86,20 @@
 .sensitive(true)
 .build();
 
+public static final PropertyDescriptor ENDPOINT_SUFFIX = new 
PropertyDescriptor.Builder()
+.name("storage-endpoint-suffix")
+.displayName("Common Storage Account Endpoint Suffix")
+.description(
+"Storage accounts in public Azure always use a common FQDN 
suffix. " +
+"Override this endpoint suffix with a different suffix in 
certain circumsances (like Azure Stack or non-public Azure regions). " +
+"The preferred way is to configure them through a 
controller service specified in the Storage Credentials property. " +
+"The controller service can provide a common/shared 
configuration for multiple/all Azure processors. Furthermore, the credentials " 
+
+"can also be looked up dynamically with the 'Lookup' 
version of the service.")
+
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+.required(false)

Review comment:
   If we don't use `allowableValues()` this will need a validator. 

##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/test/java/org/apache/nifi/processors/azure/storage/queue/GetAzureQueueStorageIT.java
##
@@ -39,6 +39,7 @@ public void setUp() throws StorageException {
 cloudQueue.addMessage(new CloudQueueMessage("Dummy Message 1"), 
604800, 0, null, null);
 cloudQueue.addMessage(new CloudQueueMessage("Dummy Message 2"), 
604800, 0, null, null);
 cloudQueue.addMessage(new CloudQueueMessage("Dummy Message 3"), 
604800, 0, null, null);
+

Review comment:
   Can you remove this since nothing else was changed in this file?

##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-services-api/src/main/java/org/apache/nifi/services/azure/storage/AzureStorageCredentialsDetails.java
##
@@ -22,17 +22,24 @@
 
 private final String storageAccountName;
 
+private final String storageSuffix;
+
 private final StorageCredentials storageCredentials;
 
-public AzureStorageCredentialsDetails(String storageAccountName, 
StorageCredentials storageCredentials) {
+public AzureStorageCredentialsDetails(String storageAccountName, String 
storageSuffix, StorageCredentials storageCredentials) {

Review comment:
   Since this is in an API NAR and it's a public constructor we have to 
leave the old one too. I'd suggest marking the two parameter one `@Deprecated` 
and having it call the new one with `.blob.core.windows.net` so it has the same 
behavior.

##
File path: 
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-services-api/src/main/java/org/apache/nifi/services/azure/storage/AzureStorageCredentialsDetails.java
##
@@ -22,17 +22,24 @@
 
 private final String storageAccountName;
 
+private final String storageSuffix;
+
 private final StorageCredentials storageCredentials;
 
-public Azur

[GitHub] [nifi] MikeThomsen commented on a change in pull request #4297: NIFI-7488 Listening Port property on HandleHttpRequest is not validated when Variable registry is used

2020-05-26 Thread GitBox


MikeThomsen commented on a change in pull request #4297:
URL: https://github.com/apache/nifi/pull/4297#discussion_r430081528



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/HandleHttpRequest.java
##
@@ -321,6 +326,24 @@
 return Collections.singleton(REL_SUCCESS);
 }
 
+@Override
+protected Collection customValidate(final 
ValidationContext validationContext) {
+final List results = new ArrayList<>();
+
+final Long port = 
validationContext.getProperty(PORT).evaluateAttributeExpressions().asLong();

Review comment:
   Why are you doing this? Is the `addValidator` call not validating items 
from the variable registry?









[GitHub] [nifi-minifi-cpp] arpadboda closed pull request #794: MINIFICPP-1232 - PublishKafka processor doesn't validate some properties

2020-05-26 Thread GitBox


arpadboda closed pull request #794:
URL: https://github.com/apache/nifi-minifi-cpp/pull/794


   







[GitHub] [nifi-minifi-cpp] phrocker edited a comment on pull request #781: MINIFICPP-1214: Converts H2O Processors to use ALv2 compliant H20-3 library

2020-05-26 Thread GitBox


phrocker edited a comment on pull request #781:
URL: https://github.com/apache/nifi-minifi-cpp/pull/781#issuecomment-633966448











[GitHub] [nifi] pvillard31 commented on a change in pull request #4297: NIFI-7488 Listening Port property on HandleHttpRequest is not validated when Variable registry is used

2020-05-26 Thread GitBox


pvillard31 commented on a change in pull request #4297:
URL: https://github.com/apache/nifi/pull/4297#discussion_r430307162



##
File path: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/HandleHttpRequest.java
##
@@ -321,6 +326,24 @@
 return Collections.singleton(REL_SUCCESS);
 }
 
+@Override
+protected Collection<ValidationResult> customValidate(final ValidationContext validationContext) {
+final List<ValidationResult> results = new ArrayList<>();
+
+final Long port = validationContext.getProperty(PORT).evaluateAttributeExpressions().asLong();

Review comment:
   The validators, most of the time, won't do anything if expression 
language is used to configure the property. I guess we could improve the 
validators based on the EL scope of the property: if the scope is Variable 
Registry, we should be able to validate the property even if EL is used.
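
   A hedged sketch of the validation idea discussed above: when a property's expression-language scope is the variable registry, the value can be evaluated at validation time and range-checked. `PortValidation` and `validatePort` are illustrative names, not the actual NiFi implementation.

   ```java
   // Illustrative port check in the spirit of the customValidate() above:
   // parse the evaluated value and verify it is a usable TCP port.
   public class PortValidation {

       /** Returns null when valid, otherwise a human-readable explanation. */
       static String validatePort(String evaluatedValue) {
           final int port;
           try {
               port = Integer.parseInt(evaluatedValue);
           } catch (NumberFormatException e) {
               return "not a number: " + evaluatedValue;
           }
           if (port < 1 || port > 65535) {
               return "out of range (1-65535): " + port;
           }
           return null;
       }

       public static void main(String[] args) {
           System.out.println(validatePort("8080"));    // null -> valid
           System.out.println(validatePort("70000"));   // out of range
           System.out.println(validatePort("${port}")); // unresolved EL caught as non-numeric
       }
   }
   ```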









[GitHub] [nifi] adarmiento opened a new pull request #4298: NIFI-7486 Make InvokeHttp authentication properties able to read from variables.

2020-05-26 Thread GitBox


adarmiento opened a new pull request #4298:
URL: https://github.com/apache/nifi/pull/4298


    Description of PR
   
   * InvokeHTTP Basic HTTP credentials support variable registry
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [X] Is there a JIRA ticket associated with this PR? Is it referenced 
in the commit message?
   
   - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [X] Has your PR been rebased against the latest commit within the target 
branch (typically `master`)?
   
   - [X] Is your initial contribution a single, squashed commit? _Additional 
commits in response to PR reviewer feedback should be made on this branch and 
pushed to allow change tracking. Do not `squash` or use `--force` when pushing 
to allow for clean monitoring of changes._
   
   ### For code changes:
   - [X] Have you ensured that the full suite of tests is executed via `mvn 
-Pcontrib-check clean install` at the root `nifi` folder?
   - [X] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
   - [ ] If applicable, have you updated the `LICENSE` file, including the main 
`LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main 
`NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to 
.name (programmatic access) for each of the new properties?
   
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for 
build issues and submit an update to your PR as soon as possible.
   







[GitHub] [nifi-minifi-cpp] arpadboda commented on pull request #792: MINIFICPP-1230 - Enable on Win and refactor MergeFileTests

2020-05-26 Thread GitBox


arpadboda commented on pull request #792:
URL: https://github.com/apache/nifi-minifi-cpp/pull/792#issuecomment-634121541


   LGTM







[GitHub] [nifi] tpalfy commented on a change in pull request #4223: NIFI-7369 Adding big decimal support for record handling in order to avoid missing precision when reading in records

2020-05-26 Thread GitBox


tpalfy commented on a change in pull request #4223:
URL: https://github.com/apache/nifi/pull/4223#discussion_r430518456



##
File path: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java
##
@@ -1232,6 +1242,10 @@ public static boolean isBigIntTypeCompatible(final 
Object value) {
 return isNumberTypeCompatible(value, DataTypeUtils::isIntegral);
 }
 
+public static boolean isBigDecimalTypeCompatible(final Object value) {
+return isNumberTypeCompatible(value, DataTypeUtils::isFloatingPoint);

Review comment:
   This could be an unarmed landmine.
   The `DataTypeUtils::isFloatingPoint` has a `Float.parse()` with the comment 
above it: `// Just to ensure that the exponents are in range, etc.`
   It ensures _no_ such thing in fact (if the number is out of range, the parse still succeeds and returns infinity), but this might get fixed later ("arming" the landmine), after which for `BigDecimal` it might start to cause issues.
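
   A small demonstration of the behaviour described above: `Float.parseFloat` does not reject out-of-range exponents, it silently overflows to `Infinity` instead of throwing `NumberFormatException`.

   ```java
   // Float.parseFloat accepts values far beyond Float.MAX_VALUE (~3.4e38)
   // and overflows to Infinity rather than failing, so a parse-based
   // "is it a valid float?" check passes for values only BigDecimal can hold.
   public class FloatOverflowDemo {
       public static void main(String[] args) {
           float f = Float.parseFloat("1e100");
           System.out.println(f);                    // Infinity
           System.out.println(Float.isInfinite(f));  // true
       }
   }
   ```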

##
File path: 
nifi-commons/nifi-record/src/main/java/org/apache/nifi/serialization/record/util/DataTypeUtils.java
##
@@ -34,6 +34,7 @@
 import java.io.InputStream;
 import java.io.Reader;
 import java.lang.reflect.Array;
+import java.math.BigDecimal;

Review comment:
   I think `public static Optional<DataType> getWiderType(final DataType thisDataType, final DataType otherDataType)` should be updated as well.
   
   Could add along a new test in `TestFieldTypeInference`:
   ```java
   @Test
   public void test() {
       // GIVEN
       List<DataType> dataTypes = Arrays.asList(
           RecordFieldType.DECIMAL.getDecimalDataType(10, 1),
           RecordFieldType.DECIMAL.getDecimalDataType(10, 3),
           RecordFieldType.DECIMAL.getDecimalDataType(7, 3),
           RecordFieldType.DECIMAL.getDecimalDataType(7, 5),
           RecordFieldType.DECIMAL.getDecimalDataType(7, 7),
           RecordFieldType.FLOAT.getDataType(),
           RecordFieldType.DOUBLE.getDataType()
       );

       DataType expected = RecordFieldType.DECIMAL.getDecimalDataType(10, 7);

       // WHEN
       // THEN
       runWithAllPermutations(this::testToDataTypeShouldReturnSingleType, dataTypes, expected);
   }
   ```









[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a change in pull request #794: MINIFICPP-1232 - PublishKafka processor doesn't validate some properties

2020-05-26 Thread GitBox


adamdebreceni commented on a change in pull request #794:
URL: https://github.com/apache/nifi-minifi-cpp/pull/794#discussion_r430189617



##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -82,10 +82,33 @@ core::Property PublishKafka::TargetBatchPayloadSize(
 core::PropertyBuilder::createProperty("Target Batch Payload 
Size")->withDescription("The target total payload size for a batch. 0 B means 
unlimited (Batch Size is still applied).")
 ->isRequired(false)->withDefaultValue("512 
KB")->build());
 core::Property PublishKafka::AttributeNameRegex("Attributes to Send as 
Headers", "Any attribute whose name matches the regex will be added to the 
Kafka messages as a Header", "");
-core::Property PublishKafka::QueueBufferMaxTime("Queue Buffering Max Time", 
"Delay to wait for messages in the producer queue to accumulate before 
constructing message batches", "");
-core::Property PublishKafka::QueueBufferMaxSize("Queue Max Buffer Size", 
"Maximum total message size sum allowed on the producer queue", "");
-core::Property PublishKafka::QueueBufferMaxMessage("Queue Max Message", 
"Maximum number of messages allowed on the producer queue", "");
-core::Property PublishKafka::CompressCodec("Compress Codec", "compression 
codec to use for compressing message sets", COMPRESSION_CODEC_NONE);
+
+const core::Property PublishKafka::QueueBufferMaxTime(

Review comment:
   to nitpick: with method chaining I don't know what the style guideline 
is, but I prefer the operator on the next line (and it seems that other 
properties do it like this, in this file)









[GitHub] [nifi-minifi-cpp] adam-markovics commented on a change in pull request #794: MINIFICPP-1232 - PublishKafka processor doesn't validate some properties

2020-05-26 Thread GitBox


adam-markovics commented on a change in pull request #794:
URL: https://github.com/apache/nifi-minifi-cpp/pull/794#discussion_r430429029



##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -82,10 +82,33 @@ core::Property PublishKafka::TargetBatchPayloadSize(
 core::PropertyBuilder::createProperty("Target Batch Payload 
Size")->withDescription("The target total payload size for a batch. 0 B means 
unlimited (Batch Size is still applied).")
 ->isRequired(false)->withDefaultValue("512 
KB")->build());
 core::Property PublishKafka::AttributeNameRegex("Attributes to Send as 
Headers", "Any attribute whose name matches the regex will be added to the 
Kafka messages as a Header", "");
-core::Property PublishKafka::QueueBufferMaxTime("Queue Buffering Max Time", 
"Delay to wait for messages in the producer queue to accumulate before 
constructing message batches", "");
-core::Property PublishKafka::QueueBufferMaxSize("Queue Max Buffer Size", 
"Maximum total message size sum allowed on the producer queue", "");
-core::Property PublishKafka::QueueBufferMaxMessage("Queue Max Message", 
"Maximum number of messages allowed on the producer queue", "");
-core::Property PublishKafka::CompressCodec("Compress Codec", "compression 
codec to use for compressing message sets", COMPRESSION_CODEC_NONE);
+
+const core::Property PublishKafka::QueueBufferMaxTime(

Review comment:
   Done in new commit.

##
File path: extensions/librdkafka/PublishKafka.h
##
@@ -89,10 +89,10 @@ class PublishKafka : public core::Processor {
   static core::Property BatchSize;
   static core::Property TargetBatchPayloadSize;
   static core::Property AttributeNameRegex;
-  static core::Property QueueBufferMaxTime;
-  static core::Property QueueBufferMaxSize;
-  static core::Property QueueBufferMaxMessage;
-  static core::Property CompressCodec;
+  static const core::Property QueueBufferMaxTime;
+  static const core::Property QueueBufferMaxSize;
+  static const core::Property QueueBufferMaxMessage;
+  static const core::Property CompressCodec;

Review comment:
   Done in new commit.









[GitHub] [nifi] turcsanyip commented on a change in pull request #4276: NIFI-7453 In PutKudu creating a new Kudu client when refreshing TGT

2020-05-26 Thread GitBox


turcsanyip commented on a change in pull request #4276:
URL: https://github.com/apache/nifi/pull/4276#discussion_r430072990



##
File path: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKuduProcessor.java
##
@@ -120,40 +125,71 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
-protected KuduClient kuduClient;
+protected volatile KuduClient kuduClient;

Review comment:
   The `kuduClient` should be private in order to ensure it won't be 
modified without locking.

##
File path: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKuduProcessor.java
##
@@ -120,40 +125,71 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
-protected KuduClient kuduClient;
+protected volatile KuduClient kuduClient;
+private final ReadWriteLock kuduClientReadWriteLock = new 
ReentrantReadWriteLock();
+private final Lock kuduClientReadLock = kuduClientReadWriteLock.readLock();
+private final Lock kuduClientWriteLock = 
kuduClientReadWriteLock.writeLock();
 
 private volatile KerberosUser kerberosUser;
 
+protected abstract void onTrigger(ProcessContext context, ProcessSession 
session, KuduClient kuduClient) throws ProcessException;
+
+@Override
+public void onTrigger(final ProcessContext context, final ProcessSession 
session) throws ProcessException {
+kuduClientReadLock.lock();
+try {
+onTrigger(context, session, kuduClient);
+} finally {
+kuduClientReadLock.unlock();
+}
+}
+
 public KerberosUser getKerberosUser() {

Review comment:
   It could be protected.
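
   A hedged sketch of the read/write-lock pattern shown in the quoted diff: many `onTrigger()` calls share the client under the read lock, while a TGT refresh recreates it under the write lock. The client is stubbed as a plain `Object` here; this illustrates the locking only, not the actual Kudu processor code.

   ```java
   import java.util.concurrent.locks.Lock;
   import java.util.concurrent.locks.ReadWriteLock;
   import java.util.concurrent.locks.ReentrantReadWriteLock;
   import java.util.function.Function;

   public class ClientHolder {
       private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
       private final Lock readLock = rwLock.readLock();
       private final Lock writeLock = rwLock.writeLock();

       private Object client = new Object(); // stands in for KuduClient

       // concurrent callers may use the client while no refresh is in progress
       public <T> T withClient(Function<Object, T> action) {
           readLock.lock();
           try {
               return action.apply(client);
           } finally {
               readLock.unlock();
           }
       }

       // exclusive: blocks until in-flight readers finish, then swaps the client
       public void refreshClient() {
           writeLock.lock();
           try {
               client = new Object();
           } finally {
               writeLock.unlock();
           }
       }

       public static void main(String[] args) {
           ClientHolder holder = new ClientHolder();
           Object before = holder.withClient(c -> c);
           holder.refreshClient();
           Object after = holder.withClient(c -> c);
           System.out.println(before != after); // true: refresh replaced the client
       }
   }
   ```

   Keeping the client field private, as the reviewer suggests, ensures every access goes through these lock-guarded methods.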

##
File path: 
nifi-nar-bundles/nifi-kudu-bundle/nifi-kudu-processors/src/main/java/org/apache/nifi/processors/kudu/AbstractKuduProcessor.java
##
@@ -120,40 +125,71 @@
 
.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
 .build();
 
-protected KuduClient kuduClient;
+protected volatile KuduClient kuduClient;
+private final ReadWriteLock kuduClientReadWriteLock = new 
ReentrantReadWriteLock();
+private final Lock kuduClientReadLock = kuduClientReadWriteLock.readLock();
+private final Lock kuduClientWriteLock = 
kuduClientReadWriteLock.writeLock();
 
 private volatile KerberosUser kerberosUser;
 
+protected abstract void onTrigger(ProcessContext context, ProcessSession 
session, KuduClient kuduClient) throws ProcessException;
+
+@Override
+public void onTrigger(final ProcessContext context, final ProcessSession 
session) throws ProcessException {
+kuduClientReadLock.lock();
+try {
+onTrigger(context, session, kuduClient);
+} finally {
+kuduClientReadLock.unlock();
+}
+}
+
 public KerberosUser getKerberosUser() {
 return this.kerberosUser;
 }
 
-public KuduClient getKuduClient() {
-return this.kuduClient;
+public void createKerberosUserAndKuduClient(ProcessContext context) throws 
LoginException {
+createKerberosUser(context);
+createKuduClient(context);
 }
 
-public void createKuduClient(ProcessContext context) throws LoginException 
{
-final String kuduMasters = 
context.getProperty(KUDU_MASTERS).evaluateAttributeExpressions().getValue();
+public void createKerberosUser(ProcessContext context) throws 
LoginException {
 final KerberosCredentialsService credentialsService = 
context.getProperty(KERBEROS_CREDENTIALS_SERVICE).asControllerService(KerberosCredentialsService.class);
 final String kerberosPrincipal = 
context.getProperty(KERBEROS_PRINCIPAL).evaluateAttributeExpressions().getValue();
 final String kerberosPassword = 
context.getProperty(KERBEROS_PASSWORD).getValue();
 
 if (credentialsService != null) {
-kerberosUser = 
loginKerberosKeytabUser(credentialsService.getPrincipal(), 
credentialsService.getKeytab());
+kerberosUser = 
loginKerberosKeytabUser(credentialsService.getPrincipal(), 
credentialsService.getKeytab(), context);
 } else if (!StringUtils.isBlank(kerberosPrincipal) && 
!StringUtils.isBlank(kerberosPassword)) {
-kerberosUser = loginKerberosPasswordUser(kerberosPrincipal, 
kerberosPassword);
+kerberosUser = loginKerberosPasswordUser(kerberosPrincipal, 
kerberosPassword, context);
 }
+}
 
-if (kerberosUser != null) {
-final KerberosAction kerberosAction = new 
KerberosAction<>(kerberosUser, () -> buildClient(kuduMasters, context), 
getLogger());
-this.kuduClient = kerberosAction.execute();
-} else {
-this.kuduClient = buildClient(kuduMasters, context);
+public void createKuduClient(Proce

[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #794: MINIFICPP-1232 - PublishKafka processor doesn't validate some properties

2020-05-26 Thread GitBox


arpadboda commented on a change in pull request #794:
URL: https://github.com/apache/nifi-minifi-cpp/pull/794#discussion_r430355694



##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -82,10 +82,33 @@ core::Property PublishKafka::TargetBatchPayloadSize(
 core::PropertyBuilder::createProperty("Target Batch Payload 
Size")->withDescription("The target total payload size for a batch. 0 B means 
unlimited (Batch Size is still applied).")
 ->isRequired(false)->withDefaultValue("512 
KB")->build());
 core::Property PublishKafka::AttributeNameRegex("Attributes to Send as 
Headers", "Any attribute whose name matches the regex will be added to the 
Kafka messages as a Header", "");
-core::Property PublishKafka::QueueBufferMaxTime("Queue Buffering Max Time", 
"Delay to wait for messages in the producer queue to accumulate before 
constructing message batches", "");
-core::Property PublishKafka::QueueBufferMaxSize("Queue Max Buffer Size", 
"Maximum total message size sum allowed on the producer queue", "");
-core::Property PublishKafka::QueueBufferMaxMessage("Queue Max Message", 
"Maximum number of messages allowed on the producer queue", "");
-core::Property PublishKafka::CompressCodec("Compress Codec", "compression 
codec to use for compressing message sets", COMPRESSION_CODEC_NONE);
+
+const core::Property PublishKafka::QueueBufferMaxTime(

Review comment:
   +1.
   
   That's the way linter accepts it as well, so please change as @adamdebreceni 
suggest!

##
File path: extensions/librdkafka/PublishKafka.h
##
@@ -89,10 +89,10 @@ class PublishKafka : public core::Processor {
   static core::Property BatchSize;
   static core::Property TargetBatchPayloadSize;
   static core::Property AttributeNameRegex;
-  static core::Property QueueBufferMaxTime;
-  static core::Property QueueBufferMaxSize;
-  static core::Property QueueBufferMaxMessage;
-  static core::Property CompressCodec;
+  static const core::Property QueueBufferMaxTime;
+  static const core::Property QueueBufferMaxSize;
+  static const core::Property QueueBufferMaxMessage;
+  static const core::Property CompressCodec;

Review comment:
   Either all or none pls!

##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -82,10 +82,33 @@ core::Property PublishKafka::TargetBatchPayloadSize(
 core::PropertyBuilder::createProperty("Target Batch Payload 
Size")->withDescription("The target total payload size for a batch. 0 B means 
unlimited (Batch Size is still applied).")
 ->isRequired(false)->withDefaultValue("512 
KB")->build());
 core::Property PublishKafka::AttributeNameRegex("Attributes to Send as 
Headers", "Any attribute whose name matches the regex will be added to the 
Kafka messages as a Header", "");
-core::Property PublishKafka::QueueBufferMaxTime("Queue Buffering Max Time", 
"Delay to wait for messages in the producer queue to accumulate before 
constructing message batches", "");
-core::Property PublishKafka::QueueBufferMaxSize("Queue Max Buffer Size", 
"Maximum total message size sum allowed on the producer queue", "");
-core::Property PublishKafka::QueueBufferMaxMessage("Queue Max Message", 
"Maximum number of messages allowed on the producer queue", "");
-core::Property PublishKafka::CompressCodec("Compress Codec", "compression 
codec to use for compressing message sets", COMPRESSION_CODEC_NONE);
+
+const core::Property PublishKafka::QueueBufferMaxTime(

Review comment:
   +1.
   
   That's the way the linter accepts it as well, so please change it as @adamdebreceni suggests!
   
   Unfortunately this extension is not linted. Yet. :(

##
File path: extensions/librdkafka/PublishKafka.cpp
##
@@ -82,10 +82,33 @@ core::Property PublishKafka::TargetBatchPayloadSize(
 core::PropertyBuilder::createProperty("Target Batch Payload 
Size")->withDescription("The target total payload size for a batch. 0 B means 
unlimited (Batch Size is still applied).")
 ->isRequired(false)->withDefaultValue("512 
KB")->build());
 core::Property PublishKafka::AttributeNameRegex("Attributes to Send as 
Headers", "Any attribute whose name matches the regex will be added to the 
Kafka messages as a Header", "");
-core::Property PublishKafka::QueueBufferMaxTime("Queue Buffering Max Time", 
"Delay to wait for messages in the producer queue to accumulate before 
constructing message batches", "");
-core::Property PublishKafka::QueueBufferMaxSize("Queue Max Buffer Size", 
"Maximum total message size sum allowed on the producer queue", "");
-core::Property PublishKafka::QueueBufferMaxMessage("Queue Max Message", 
"Maximum number of messages allowed on the producer queue", "");
-core::Property PublishKafka::CompressCodec("Compress Codec", "compression 
codec to use for compressing message sets", COMPRESSION_CODEC_NONE);
+
+const core::Property PublishKafka::QueueBufferMaxTime(

Review comment:
   Not sure if this still applies after your linter changes, but previ

[GitHub] [nifi-minifi-cpp] arpadboda commented on a change in pull request #792: MINIFICPP-1230 - Enable on Win and refactor MergeFileTests

2020-05-26 Thread GitBox


arpadboda commented on a change in pull request #792:
URL: https://github.com/apache/nifi-minifi-cpp/pull/792#discussion_r430528786



##
File path: libminifi/test/archive-tests/MergeFileTests.cpp
##
@@ -275,785 +312,451 @@ TEST_CASE("MergeFileDefragmentDelimiter", 
"[mergefiletest2]") {
 expectfileSecond << "demarcator";
   std::ofstream tmpfile;
   std::string flowFileName = std::string(FLOW_FILE) + "." + 
std::to_string(i) + ".txt";
-  tmpfile.open(flowFileName.c_str());
+  tmpfile.open(flowFileName.c_str(), std::ios::binary);
   for (int j = 0; j < 32; j++) {
 tmpfile << std::to_string(i);
 if (i < 3)
   expectfileFirst << std::to_string(i);
 else
   expectfileSecond << std::to_string(i);
   }
-  tmpfile.close();
 }
 expectfileFirst << "footer";
 expectfileSecond << "footer";
-expectfileFirst.close();
-expectfileSecond.close();
-
-TestController testController;
-
LogTestController::getInstance().setTrace();
-
LogTestController::getInstance().setTrace();
-LogTestController::getInstance().setTrace();
-
LogTestController::getInstance().setTrace();
-
LogTestController::getInstance().setTrace();
-
LogTestController::getInstance().setTrace();
-
LogTestController::getInstance().setTrace();
-
LogTestController::getInstance().setTrace();
-
LogTestController::getInstance().setTrace();
-
-std::shared_ptr repo = std::make_shared();
-
-std::shared_ptr processor = 
std::make_shared("mergecontent");
-std::shared_ptr logAttributeProcessor = 
std::make_shared("logattribute");
-processor->initialize();
-utils::Identifier processoruuid;
-REQUIRE(true == processor->getUUID(processoruuid));
-utils::Identifier logAttributeuuid;
-REQUIRE(true == logAttributeProcessor->getUUID(logAttributeuuid));
-
-std::shared_ptr content_repo = 
std::make_shared();
-
content_repo->initialize(std::make_shared());
-// connection from merge processor to log attribute
-std::shared_ptr connection = 
std::make_shared(repo, content_repo, 
"logattributeconnection");
-connection->addRelationship(core::Relationship("merged", "Merge successful 
output"));
-connection->setSource(processor);
-connection->setDestination(logAttributeProcessor);
-connection->setSourceUUID(processoruuid);
-connection->setDestinationUUID(logAttributeuuid);
-processor->addConnection(connection);
-// connection to merge processor
-std::shared_ptr mergeconnection = 
std::make_shared(repo, content_repo, "mergeconnection");
-mergeconnection->setDestination(processor);
-mergeconnection->setDestinationUUID(processoruuid);
-processor->addConnection(mergeconnection);
-
-std::set autoTerminatedRelationships;
-core::Relationship original("original", "");
-core::Relationship failure("failure", "");
-autoTerminatedRelationships.insert(original);
-autoTerminatedRelationships.insert(failure);
-processor->setAutoTerminatedRelationships(autoTerminatedRelationships);
-
-processor->incrementActiveTasks();
-processor->setScheduledState(core::ScheduledState::RUNNING);
-logAttributeProcessor->incrementActiveTasks();
-logAttributeProcessor->setScheduledState(core::ScheduledState::RUNNING);
+  }
 
-std::shared_ptr node = 
std::make_shared(processor);
-std::shared_ptr 
controller_services_provider = nullptr;
-auto context = std::make_shared(node, 
controller_services_provider, repo, repo, content_repo);
-
context->setProperty(org::apache::nifi::minifi::processors::MergeContent::MergeFormat,
 MERGE_FORMAT_CONCAT_VALUE);
-
context->setProperty(org::apache::nifi::minifi::processors::MergeContent::MergeStrategy,
 MERGE_STRATEGY_DEFRAGMENT);
-
context->setProperty(org::apache::nifi::minifi::processors::MergeContent::DelimiterStratgey,
 DELIMITER_STRATEGY_FILENAME);
-
context->setProperty(org::apache::nifi::minifi::processors::MergeContent::Header,
 "/tmp/minifi-mergecontent.header");
-
context->setProperty(org::apache::nifi::minifi::processors::MergeContent::Footer,
 "/tmp/minifi-mergecontent.footer");
-
context->setProperty(org::apache::nifi::minifi::processors::MergeContent::Demarcator,
 "/tmp/minifi-mergecontent.demarcator");
-
-core::ProcessSession sessionGenFlowFile(context);
-std::shared_ptr record[6];
-
-// Generate 6 flowfiles, first threes merged to one, second thress merged 
to one
-std::shared_ptr income = 
node->getNextIncomingConnection();
-std::shared_ptr income_connection = 
std::static_pointer_cast(income);
-for (int i = 0; i < 6; i++) {
-  std::shared_ptr flow = std::static_pointer_cast < 
core::FlowFile > (sessionGenFlowFile.create());
-  std::string flowFileName = std::string(FLOW_FILE) + "." + 
std::to_string(i) + ".txt";
-  sessionGenFlowFile.import(flowFileName, flow, true, 0);
-  // three bundle
-  if (i < 3)
-flow

[GitHub] [nifi] asfgit closed pull request #4295: NIFI-7485 Updated commons-configuration2.

2020-05-26 Thread GitBox


asfgit closed pull request #4295:
URL: https://github.com/apache/nifi/pull/4295


   







[jira] [Commented] (NIFI-7485) Update dependency

2020-05-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17117046#comment-17117046
 ] 

ASF subversion and git services commented on NIFI-7485:
---

Commit aa804cfcebea23f1316b0c5ad5a6140bec57de01 in nifi's branch 
refs/heads/master from Mike Thomsen
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=aa804cf ]

NIFI-7485 Updated commons-configuration2.
NIFI-7485 Found more instances that needed updating.

This closes #4295


> Update dependency
> -
>
> Key: NIFI-7485
> URL: https://issues.apache.org/jira/browse/NIFI-7485
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We need to update nifi-security-utils to use newer commons components.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7485) Update dependency

2020-05-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17117047#comment-17117047
 ] 

ASF subversion and git services commented on NIFI-7485:
---

Commit aa804cfcebea23f1316b0c5ad5a6140bec57de01 in nifi's branch 
refs/heads/master from Mike Thomsen
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=aa804cf ]

NIFI-7485 Updated commons-configuration2.
NIFI-7485 Found more instances that needed updating.

This closes #4295


> Update dependency
> -
>
> Key: NIFI-7485
> URL: https://issues.apache.org/jira/browse/NIFI-7485
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Mike Thomsen
>Assignee: Mike Thomsen
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We need to update nifi-security-utils to use newer commons components.





[jira] [Commented] (NIFI-6526) GetAzureEventHub connection to service bus fails, anonymous cipher suites are within the supported list

2020-05-26 Thread Marc Parisi (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17116877#comment-17116877
 ] 

Marc Parisi commented on NIFI-6526:
---

[~jfrazee] Thanks for the note. I did test but forgot to comment. Thanks!

> GetAzureEventHub connection to service bus fails,  anonymous cipher suites 
> are within the supported list
> 
>
> Key: NIFI-6526
> URL: https://issues.apache.org/jira/browse/NIFI-6526
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.9.2
>Reporter: Sunile Manjee
>Priority: Major
> Fix For: 1.10.0
>
>
> There has been a SDK update, issue described here: 
> [https://github.com/Azure/azure-service-bus-java/issues/332]
> Connection to service bus fails if using version of JRE higher than 
> 1.8.0_191.  
>  
> Log file attached





[jira] [Resolved] (NIFI-6526) GetAzureEventHub connection to service bus fails, anonymous cipher suites are within the supported list

2020-05-26 Thread Joey Frazee (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Frazee resolved NIFI-6526.
---
Resolution: Fixed

> GetAzureEventHub connection to service bus fails,  anonymous cipher suites 
> are within the supported list
> 
>
> Key: NIFI-6526
> URL: https://issues.apache.org/jira/browse/NIFI-6526
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.9.2
>Reporter: Sunile Manjee
>Priority: Major
> Fix For: 1.10.0
>
>
> There has been a SDK update, issue described here: 
> [https://github.com/Azure/azure-service-bus-java/issues/332]
> Connection to service bus fails if using version of JRE higher than 
> 1.8.0_191.  
>  
> Log file attached





[jira] [Updated] (NIFI-6526) GetAzureEventHub connection to service bus fails, anonymous cipher suites are within the supported list

2020-05-26 Thread Joey Frazee (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Frazee updated NIFI-6526:
--
Fix Version/s: 1.10.0

> GetAzureEventHub connection to service bus fails,  anonymous cipher suites 
> are within the supported list
> 
>
> Key: NIFI-6526
> URL: https://issues.apache.org/jira/browse/NIFI-6526
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.9.2
>Reporter: Sunile Manjee
>Priority: Major
> Fix For: 1.10.0
>
>
> There has been a SDK update, issue described here: 
> [https://github.com/Azure/azure-service-bus-java/issues/332]
> Connection to service bus fails if using version of JRE higher than 
> 1.8.0_191.  
>  
> Log file attached





[jira] [Commented] (NIFI-6526) GetAzureEventHub connection to service bus fails, anonymous cipher suites are within the supported list

2020-05-26 Thread Joey Frazee (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17116874#comment-17116874
 ] 

Joey Frazee commented on NIFI-6526:
---

[~phrocker] I'm not sure if you got around to testing this but it is indeed 
fixed in 1.10 and above. If you follow the chain of issues there was an issue 
in proton-j that was resolved in 0.31, so when azure-eventhubs got bumped up to 
2.3.2 from 0.14.4, we got the new version of proton-j.

I validated it against 1.8.0, 1.9.2 for the error and 1.10.0 and 1.11.4 for the 
fix. Closing this out.

cc [~sunileman...@gmail.com] 

> GetAzureEventHub connection to service bus fails,  anonymous cipher suites 
> are within the supported list
> 
>
> Key: NIFI-6526
> URL: https://issues.apache.org/jira/browse/NIFI-6526
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0
>Reporter: Sunile Manjee
>Priority: Major
>
> There has been a SDK update, issue described here: 
> [https://github.com/Azure/azure-service-bus-java/issues/332]
> Connection to service bus fails if using version of JRE higher than 
> 1.8.0_191.  
>  
> Log file attached





[jira] [Updated] (NIFI-6526) GetAzureEventHub connection to service bus fails, anonymous cipher suites are within the supported list

2020-05-26 Thread Joey Frazee (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joey Frazee updated NIFI-6526:
--
Affects Version/s: 1.9.0
   1.9.1
   1.9.2

> GetAzureEventHub connection to service bus fails,  anonymous cipher suites 
> are within the supported list
> 
>
> Key: NIFI-6526
> URL: https://issues.apache.org/jira/browse/NIFI-6526
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.9.2
>Reporter: Sunile Manjee
>Priority: Major
>
> There has been a SDK update, issue described here: 
> [https://github.com/Azure/azure-service-bus-java/issues/332]
> Connection to service bus fails if using version of JRE higher than 
> 1.8.0_191.  
>  
> Log file attached





[jira] [Updated] (NIFI-7453) PutKudu kerberos issue after TGT expires

2020-05-26 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-7453:
--
Component/s: Extensions

> PutKudu kerberos issue after TGT expires 
> -
>
> Key: NIFI-7453
> URL: https://issues.apache.org/jira/browse/NIFI-7453
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When PutKudu is used with kerberos authentication, it stops working when the 
> TGT expires with the following logs/exceptions:
> {noformat}
> ERROR org.apache.nifi.processors.kudu.PutKudu: 
> PutKudu[id=4ad63284-cb39-1c78-bd0e-c280df797039] Failed to write due to Row 
> error for primary key="feebfe81-4ee6-4a8b-91ca-311e1c4f8749", tablet=null, 
> server=null, status=Runtime error: cannot re-acquire authentication token 
> after 5 attempts (Couldn't find a valid master in (HOST:PORT). Exceptions 
> received: [org.apache.kudu.client.NonRecoverableException: server requires 
> authentication, but client does not have Kerberos credentials (tgt). 
> Authentication tokens were not used because this connection will be used to 
> acquire a new token and therefore requires primary credentials])
> 2020-05-13 09:27:05,157 INFO org.apache.kudu.client.ConnectToCluster: Unable 
> to connect to master HOST:PORT: server requires authentication, but client 
> does not have Kerberos credentials (tgt). Authentication tokens were not used 
> because this connection will be used to acquire a new token and therefore 
> requires primary credentials
> 2020-05-13 09:27:05,159 WARN org.apache.kudu.client.AsyncKuduSession: 
> unexpected tablet lookup failure for operation KuduRpc(method=Write, 
> tablet=null, attempt=0, DeadlineTracker(timeout=0, elapsed=15), No traces)
> org.apache.kudu.client.NonRecoverableException: cannot re-acquire 
> authentication token after 5 attempts (Couldn't find a valid master in 
> (HOST:PORT). Exceptions received: [org.apache.kudu.client.NonRecoverableException: server requires authentication, but client does not have 
> Kerberos credentials (tgt). Authentication tokens were not used because this 
> connection will be used to acquire a new token and therefore requires primary 
> credentials])
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:158)
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:141)
> at com.stumbleupon.async.Deferred.doCall(Deferred.java:1280)
> at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1259)
> at com.stumbleupon.async.Deferred.callback(Deferred.java:1002)
> at 
> org.apache.kudu.client.ConnectToCluster.incrementCountAndCheckExhausted(ConnectToCluster.java:246)
> ...
> {noformat}





[jira] [Assigned] (NIFI-7453) PutKudu kerberos issue after TGT expires

2020-05-26 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi reassigned NIFI-7453:
-

Assignee: Tamas Palfy

> PutKudu kerberos issue after TGT expires 
> -
>
> Key: NIFI-7453
> URL: https://issues.apache.org/jira/browse/NIFI-7453
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When PutKudu is used with kerberos authentication, it stops working when the 
> TGT expires with the following logs/exceptions:
> {noformat}
> ERROR org.apache.nifi.processors.kudu.PutKudu: 
> PutKudu[id=4ad63284-cb39-1c78-bd0e-c280df797039] Failed to write due to Row 
> error for primary key="feebfe81-4ee6-4a8b-91ca-311e1c4f8749", tablet=null, 
> server=null, status=Runtime error: cannot re-acquire authentication token 
> after 5 attempts (Couldn't find a valid master in (HOST:PORT). Exceptions 
> received: [org.apache.kudu.client.NonRecoverableException: server requires 
> authentication, but client does not have Kerberos credentials (tgt). 
> Authentication tokens were not used because this connection will be used to 
> acquire a new token and therefore requires primary credentials])
> 2020-05-13 09:27:05,157 INFO org.apache.kudu.client.ConnectToCluster: Unable 
> to connect to master HOST:PORT: server requires authentication, but client 
> does not have Kerberos credentials (tgt). Authentication tokens were not used 
> because this connection will be used to acquire a new token and therefore 
> requires primary credentials
> 2020-05-13 09:27:05,159 WARN org.apache.kudu.client.AsyncKuduSession: 
> unexpected tablet lookup failure for operation KuduRpc(method=Write, 
> tablet=null, attempt=0, DeadlineTracker(timeout=0, elapsed=15), No traces)
> org.apache.kudu.client.NonRecoverableException: cannot re-acquire 
> authentication token after 5 attempts (Couldn't find a valid master in 
> (HOST:PORT). Exceptions received: [org.apache.kudu.client.NonRecoverableException: server requires authentication, but client does not have 
> Kerberos credentials (tgt). Authentication tokens were not used because this 
> connection will be used to acquire a new token and therefore requires primary 
> credentials])
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:158)
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:141)
> at com.stumbleupon.async.Deferred.doCall(Deferred.java:1280)
> at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1259)
> at com.stumbleupon.async.Deferred.callback(Deferred.java:1002)
> at 
> org.apache.kudu.client.ConnectToCluster.incrementCountAndCheckExhausted(ConnectToCluster.java:246)
> ...
> {noformat}





[jira] [Resolved] (NIFI-7453) PutKudu kerberos issue after TGT expires

2020-05-26 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi resolved NIFI-7453.
---
Fix Version/s: 1.12.0
   Resolution: Fixed

> PutKudu kerberos issue after TGT expires 
> -
>
> Key: NIFI-7453
> URL: https://issues.apache.org/jira/browse/NIFI-7453
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
> Fix For: 1.12.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When PutKudu is used with kerberos authentication, it stops working when the 
> TGT expires with the following logs/exceptions:
> {noformat}
> ERROR org.apache.nifi.processors.kudu.PutKudu: 
> PutKudu[id=4ad63284-cb39-1c78-bd0e-c280df797039] Failed to write due to Row 
> error for primary key="feebfe81-4ee6-4a8b-91ca-311e1c4f8749", tablet=null, 
> server=null, status=Runtime error: cannot re-acquire authentication token 
> after 5 attempts (Couldn't find a valid master in (HOST:PORT). Exceptions 
> received: [org.apache.kudu.client.NonRecoverableException: server requires 
> authentication, but client does not have Kerberos credentials (tgt). 
> Authentication tokens were not used because this connection will be used to 
> acquire a new token and therefore requires primary credentials])
> 2020-05-13 09:27:05,157 INFO org.apache.kudu.client.ConnectToCluster: Unable 
> to connect to master HOST:PORT: server requires authentication, but client 
> does not have Kerberos credentials (tgt). Authentication tokens were not used 
> because this connection will be used to acquire a new token and therefore 
> requires primary credentials
> 2020-05-13 09:27:05,159 WARN org.apache.kudu.client.AsyncKuduSession: 
> unexpected tablet lookup failure for operation KuduRpc(method=Write, 
> tablet=null, attempt=0, DeadlineTracker(timeout=0, elapsed=15), No traces)
> org.apache.kudu.client.NonRecoverableException: cannot re-acquire 
> authentication token after 5 attempts (Couldn't find a valid master in 
> (HOST:PORT). Exceptions received: [org.apache.kudu.client.NonRecoverableException: server requires authentication, but client does not have 
> Kerberos credentials (tgt). Authentication tokens were not used because this 
> connection will be used to acquire a new token and therefore requires primary 
> credentials])
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:158)
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:141)
> at com.stumbleupon.async.Deferred.doCall(Deferred.java:1280)
> at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1259)
> at com.stumbleupon.async.Deferred.callback(Deferred.java:1002)
> at 
> org.apache.kudu.client.ConnectToCluster.incrementCountAndCheckExhausted(ConnectToCluster.java:246)
> ...
> {noformat}





[jira] [Commented] (NIFI-7453) PutKudu kerberos issue after TGT expires

2020-05-26 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17116856#comment-17116856
 ] 

ASF subversion and git services commented on NIFI-7453:
---

Commit ca65bba5d720550aab97fcfc58be46e1b77001d3 in nifi's branch 
refs/heads/master from Tamas Palfy
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ca65bba ]

NIFI-7453 In PutKudu creating a new Kudu client when refreshing TGT

NIFI-7453 Creating a new Kudu client when refreshing TGT in 
KerberosPasswordUser as well. (Applied to KerberosKeytabUser only before.)
NIFI-7453 Safely closing old Kudu client before creating a new one.
NIFI-7453 Visibility adjustment.

This closes #4276.

Signed-off-by: Peter Turcsanyi 


> PutKudu kerberos issue after TGT expires 
> -
>
> Key: NIFI-7453
> URL: https://issues.apache.org/jira/browse/NIFI-7453
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Tamas Palfy
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When PutKudu is used with kerberos authentication, it stops working when the 
> TGT expires with the following logs/exceptions:
> {noformat}
> ERROR org.apache.nifi.processors.kudu.PutKudu: 
> PutKudu[id=4ad63284-cb39-1c78-bd0e-c280df797039] Failed to write due to Row 
> error for primary key="feebfe81-4ee6-4a8b-91ca-311e1c4f8749", tablet=null, 
> server=null, status=Runtime error: cannot re-acquire authentication token 
> after 5 attempts (Couldn't find a valid master in (HOST:PORT). Exceptions 
> received: [org.apache.kudu.client.NonRecoverableException: server requires 
> authentication, but client does not have Kerberos credentials (tgt). 
> Authentication tokens were not used because this connection will be used to 
> acquire a new token and therefore requires primary credentials])
> 2020-05-13 09:27:05,157 INFO org.apache.kudu.client.ConnectToCluster: Unable 
> to connect to master HOST:PORT: server requires authentication, but client 
> does not have Kerberos credentials (tgt). Authentication tokens were not used 
> because this connection will be used to acquire a new token and therefore 
> requires primary credentials
> 2020-05-13 09:27:05,159 WARN org.apache.kudu.client.AsyncKuduSession: 
> unexpected tablet lookup failure for operation KuduRpc(method=Write, 
> tablet=null, attempt=0, DeadlineTracker(timeout=0, elapsed=15), No traces)
> org.apache.kudu.client.NonRecoverableException: cannot re-acquire 
> authentication token after 5 attempts (Couldn't find a valid master in 
> (HOST:PORT). Exceptions received: [org.apache.kudu.client.NonRecoverableException: server requires authentication, but client does not have 
> Kerberos credentials (tgt). Authentication tokens were not used because this 
> connection will be used to acquire a new token and therefore requires primary 
> credentials])
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:158)
> at 
> org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:141)
> at com.stumbleupon.async.Deferred.doCall(Deferred.java:1280)
> at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1259)
> at com.stumbleupon.async.Deferred.callback(Deferred.java:1002)
> at 
> org.apache.kudu.client.ConnectToCluster.incrementCountAndCheckExhausted(ConnectToCluster.java:246)
> ...
> {noformat}



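The commit message above describes the fix's pattern: when the Kerberos TGT is renewed, the cached Kudu client holds a stale authentication token, so the processor safely closes the old client and builds a fresh one instead of reusing it. The following is a stdlib-only sketch of that close-then-rebuild pattern; `RefreshableClient` and `FakeClient` are illustrative names, not the actual NiFi or Kudu classes.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Sketch of the NIFI-7453 pattern: on credential refresh, close the stale
// client best-effort, then swap in a freshly built one.
public class RefreshableClient<C extends AutoCloseable> {
    private final Supplier<C> factory;
    private final AtomicReference<C> current;

    public RefreshableClient(Supplier<C> factory) {
        this.factory = factory;
        this.current = new AtomicReference<>(factory.get());
    }

    public C get() {
        return current.get();
    }

    /** Called after the TGT is renewed: close the stale client, then swap in a fresh one. */
    public C refresh() {
        C old = current.getAndSet(null);
        if (old != null) {
            try {
                old.close();          // best-effort: a close failure must not block the refresh
            } catch (Exception ignored) {
            }
        }
        C fresh = factory.get();
        current.set(fresh);
        return fresh;
    }

    // Tiny stand-in for a Kudu client so the sketch is self-contained.
    static final class FakeClient implements AutoCloseable {
        boolean closed;
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        RefreshableClient<FakeClient> holder = new RefreshableClient<>(FakeClient::new);
        FakeClient first = holder.get();
        FakeClient second = holder.refresh();   // simulate a ticket refresh
        System.out.println(first.closed);       // true: the stale client was closed
        System.out.println(first != second);    // true: a brand-new client is now in use
    }
}
```

With the real Kudu client, the factory would be something like building via `KuduClient.KuduClientBuilder` against the master addresses; the key point the commit stresses is closing the old client before creating the new one.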




[jira] [Commented] (NIFI-7296) BST TimeZone parsing fails, breaking webgui and API

2020-05-26 Thread Jira


[ 
https://issues.apache.org/jira/browse/NIFI-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17116773#comment-17116773
 ] 

Tamás Bunth commented on NIFI-7296:
---

I could reproduce the bug using CentOS 7.8.2003, 
java-11-openjdk-11.0.7.10-4.el7_8.x86_64.

The problem is related to an OpenJDK bug [1] that has already been fixed and 
backported to OpenJDK 11.0.8.

The problem stems from a new entry in CLDR, "America/Nuuk". Nuuk is the 
renamed Godthab, and the rename also has to be addressed in OpenJDK 
(apparently, the existing data is linked to the new entry).

Since it's not "our" bug and the range of affected Java versions is narrow, I 
would close this issue.

[1] https://bugs.openjdk.java.net/browse/JDK-8243541

> BST TimeZone parsing fails, breaking webgui and API
> ---
>
> Key: NIFI-7296
> URL: https://issues.apache.org/jira/browse/NIFI-7296
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.11.3
> Environment: Nifi 1.11.3 running on 
> jre-11-openjdk-11.0.4.11-1.el7_7.x86_64 and RHEL 8 
> cluster of 6 servers
>Reporter: Michael Percival
>Assignee: Tamás Bunth
>Priority: Blocker
>
> Since clocks have changed in the UK and we have moved to BST, API calls and 
> browsing to the web gui fails with a 'An unexpected error has occurred. 
> Please check the logs for additional details.' error. reviewing the 
> nifi-user.log shows the below when attempting to access the webgui, appears 
> the timezone is not being parsed properly by the web server, see below:
> Caused by: java.time.format.DateTimeParseException: Text '12:23:17 BST' could 
> not be parsed: null
>  at 
> java.base/java.time.format.DateTimeFormatter.createError(DateTimeFormatter.java:2017)
>  at 
> java.base/java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1952)
>  at java.base/java.time.LocalDateTime.parse(LocalDateTime.java:492)
>  at 
> org.apache.nifi.web.api.dto.util.TimeAdapter.unmarshal(TimeAdapter.java:55)
>  at 
> org.apache.nifi.web.api.dto.util.TimeAdapter.unmarshal(TimeAdapter.java:33)
>  at 
> com.fasterxml.jackson.module.jaxb.AdapterConverter.convert(AdapterConverter.java:35)
>  at 
> com.fasterxml.jackson.databind.deser.std.StdDelegatingDeserializer.convertValue(StdDelegatingDeserializ$
>  at 
> com.fasterxml.jackson.databind.deser.std.StdDelegatingDeserializer.deserialize(StdDelegatingDeserialize$
>  at 
> com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
>  at 
> com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
>  ... 122 common frames omitted



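The stack trace above shows NiFi's TimeAdapter failing while parsing a timestamp whose pattern includes an abbreviated zone name ("z"). A minimal sketch of the failure mode, assuming a pattern of that shape: on a JDK affected by JDK-8243541 the zone-name tables are inconsistent, so such a parse can throw DateTimeParseException even for well-formed input, while it succeeds on a fixed JDK. `ZoneParseDemo` and `describe` are illustrative names; NiFi's actual adapter parses a full LocalDateTime.

```java
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.util.Locale;

public class ZoneParseDemo {
    // Report whether a time string parses under the given pattern.
    static String describe(String text, String pattern) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern(pattern, Locale.UK);
        try {
            return "parsed: " + LocalTime.parse(text, fmt);
        } catch (DateTimeParseException e) {
            return "failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // A zone-free pattern parses on any JDK.
        System.out.println(describe("12:23:17", "HH:mm:ss"));       // parsed: 12:23:17
        // A pattern with an abbreviated zone name ("z") is where the
        // affected JDKs fail; the outcome depends on the JDK's zone data.
        System.out.println(describe("12:23:17 BST", "HH:mm:ss z"));
    }
}
```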


[jira] [Closed] (NIFIREG-393) VersionedComponent field name does not match getter/setter

2020-05-26 Thread Matthew Knight (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFIREG-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Knight closed NIFIREG-393.
--
Resolution: Workaround

> VersionedComponent field name does not match getter/setter
> --
>
> Key: NIFIREG-393
> URL: https://issues.apache.org/jira/browse/NIFIREG-393
> Project: NiFi Registry
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: Matthew Knight
>Priority: Trivial
>  Labels: easyfix, registry, stateless
>
> In org.apache.nifi.registry.flow.VersionedComponent the field name groupId 
> does not match the getter/setter getGroupIdentifier.
> This causes problems with exporting flows to json using nifi-registry (uses 
> jackson, which references getter/setters) and importing those flows into 
> nifi-stateless (uses gson, which looks at field names).
> Changing the field name from groupId to groupIdentifier should fix this 
> problem.



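The mismatch described in NIFIREG-393 comes down to how the two mappers derive a JSON key: getter-driven mappers such as Jackson derive "groupIdentifier" from `getGroupIdentifier`, while field-driven mappers such as Gson derive "groupId" from the field name, so a flow exported by one cannot be read back by the other. A pure-reflection sketch of that divergence (no Jackson or Gson on the classpath; `VersionedComponentSketch` is a cut-down stand-in, not the real class):

```java
import java.lang.reflect.Field;

public class FieldNameMismatch {
    // Cut-down stand-in for org.apache.nifi.registry.flow.VersionedComponent:
    // the field is "groupId" but the accessors are getGroupIdentifier/setGroupIdentifier.
    static class VersionedComponentSketch {
        private String groupId;
        public String getGroupIdentifier() { return groupId; }
        public void setGroupIdentifier(String id) { this.groupId = id; }
    }

    // Key a field-driven mapper (Gson-style) would use.
    static String fieldKey() {
        Field f = VersionedComponentSketch.class.getDeclaredFields()[0];
        return f.getName();
    }

    // Key a getter-driven mapper (Jackson-style) would use:
    // strip the "get" prefix and lower-case the first letter.
    static String getterKey() {
        String tail = "getGroupIdentifier".substring(3);
        return Character.toLowerCase(tail.charAt(0)) + tail.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(fieldKey());   // groupId
        System.out.println(getterKey());  // groupIdentifier
    }
}
```

Renaming the field to `groupIdentifier`, as the issue suggests, makes both derivations agree.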


[jira] [Resolved] (MINIFICPP-1239) Use STL regular expressions whenever we can

2020-05-26 Thread Ferenc Gerlits (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Gerlits resolved MINIFICPP-1239.
---
Resolution: Fixed

> Use STL regular expressions whenever we can
> ---
>
> Key: MINIFICPP-1239
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1239
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Ferenc Gerlits
>Assignee: Ferenc Gerlits
>Priority: Minor
> Fix For: 0.8.0
>
>
> Because gcc < 4.9 has only partial and buggy support for regular expressions 
> (see [https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53631]), we need to use 
> the more basic regex functions of regex.h when using those old compilers.  In 
> RegexUtils.h, this is done using the condition
> {code:c++}
> #if (__cplusplus > 201103L) || defined(_WIN32)
> // use STL regexes
> {code}
> However, this means that we only use the STL regexes on Windows, because on 
> Linux and MacOS we compile with {{-std=c++11}} (and not 14), so 
> {{__cplusplus}} is always set to {{201103}}.
> Change the condition to use STL regexes by default, and only fall back to 
> regex.h in the case of old gcc compilers, i.e.
> {code:c++}
> #if defined(__GNUC__) && (__GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 
> 9))
> // don't use STL regexes
> {code}
>  





[jira] [Resolved] (NIFI-7479) Listening Port property on HandleHttpRequest doesn't work with parameters

2020-05-26 Thread David Malament (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Malament resolved NIFI-7479.
--
Resolution: Not A Problem

This was not actually an issue; it was user error.

> Listening Port property on HandleHttpRequest doesn't work with parameters
> -
>
> Key: NIFI-7479
> URL: https://issues.apache.org/jira/browse/NIFI-7479
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.4
>Reporter: David Malament
>Priority: Major
> Attachments: image-2020-05-22-10-29-01-827.png
>
>
> The Listening Port property on the HandleHttpRequest processor clearly 
> indicates that parameters are supported (see screenshot) and the processor 
> starts up successfully, but any requests to the configured port give a 
> "connection refused" error. Switching the property to a hard-coded value or a 
> variable instead of a parameter restores functionality.
> !image-2020-05-22-10-29-01-827.png!





[jira] [Commented] (NIFI-7479) Listening Port property on HandleHttpRequest doesn't work with parameters

2020-05-26 Thread David Malament (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17116704#comment-17116704
 ] 

David Malament commented on NIFI-7479:
--

I'm an idiot, it's working fine. I must have missed setting the parameter 
context on the parent process group. 

> Listening Port property on HandleHttpRequest doesn't work with parameters
> -
>
> Key: NIFI-7479
> URL: https://issues.apache.org/jira/browse/NIFI-7479
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.4
>Reporter: David Malament
>Priority: Major
> Attachments: image-2020-05-22-10-29-01-827.png
>
>
> The Listening Port property on the HandleHttpRequest processor clearly 
> indicates that parameters are supported (see screenshot) and the processor 
> starts up successfully, but any requests to the configured port give a 
> "connection refused" error. Switching the property to a hard-coded value or a 
> variable instead of a parameter restores functionality.
> !image-2020-05-22-10-29-01-827.png!





[jira] [Assigned] (NIFI-4893) Cannot convert Avro schemas to Record schemas with default value in arrays

2020-05-26 Thread Gardella Juan Pablo (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo reassigned NIFI-4893:
-

Assignee: Gardella Juan Pablo

> Cannot convert Avro schemas to Record schemas with default value in arrays
> --
>
> Key: NIFI-4893
> URL: https://issues.apache.org/jira/browse/NIFI-4893
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.5.0
> Environment: ALL
>Reporter: Gardella Juan Pablo
>Assignee: Gardella Juan Pablo
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: issue1.zip
>
>
> Given an Avro schema that has a default array defined, it cannot be converted 
> to a NiFi Record schema.
> To reproduce the bug, try to convert the following Avro schema to Record 
> Schema:
> {code}
> {
>     "type": "record",
>     "name": "Foo1",
>     "namespace": "foo.namespace",
>     "fields": [
>         {
>             "name": "listOfInt",
>             "type": {
>                 "type": "array",
>                 "items": "int"
>             },
>             "doc": "array of ints",
>             "default": 0
>         }
>     ]
> }
> {code}
>  
> The conversion uses the org.apache.nifi.avro.AvroTypeUtil class. A Maven project 
> that reproduces the issue, along with the fix, is attached.
> * To reproduce the bug, run "mvn clean test"
> * To test the fix, run "mvn clean test -Ppatch".
>  





[jira] [Created] (MINIFICPP-1239) Use STL regular expressions whenever we can

2020-05-26 Thread Ferenc Gerlits (Jira)
Ferenc Gerlits created MINIFICPP-1239:
-

 Summary: Use STL regular expressions whenever we can
 Key: MINIFICPP-1239
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1239
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Ferenc Gerlits
Assignee: Ferenc Gerlits
 Fix For: 0.8.0


Because gcc < 4.9 has only partial and buggy support for regular expressions 
(see [https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53631]), we need to use the 
more basic regex functions of regex.h when using those old compilers.  In 
RegexUtils.h, this is done using the condition

{code:c++}
#if (__cplusplus > 201103L) || defined(_WIN32)
// use STL regexes
{code}

However, this means that we only use the STL regexes on Windows, because on 
Linux and MacOS we compile with {{-std=c++11}} (and not 14), so {{__cplusplus}} 
is always set to {{201103}}.

Change the condition to use STL regexes by default, and only fall back to 
regex.h in the case of old gcc compilers, i.e.

{code:c++}
#if defined(__GNUC__) && (__GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 9))
// don't use STL regexes
{code}
 





[jira] [Commented] (NIFI-7464) CSVRecordSetWriter does not output header for record sets with zero records

2020-05-26 Thread Alessandro D'Armiento (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17116547#comment-17116547
 ] 

Alessandro D'Armiento commented on NIFI-7464:
-

This could be made optional with an additional parameter in CSVRecordSetWriter

> CSVRecordSetWriter does not output header for record sets with zero records
> ---
>
> Key: NIFI-7464
> URL: https://issues.apache.org/jira/browse/NIFI-7464
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.11.3
>Reporter: Karl Fredrickson
>Priority: Major
>
> If you configure CSVRecordSetWriter to output a header row, and a processor 
> such as QueryRecord or ConvertRecord writes out a flowfile with zero records 
> using the CSVRecordSetWriter, the header row will not be included.
> This affects the QueryRecord and ConvertRecord processors and presumably all 
> other processors that can be configured to use CSVRecordSetWriter.
> I suppose this could be intentional behavior, but older versions of NiFi, like 
> 1.3, do output a header even when writing a zero-record flowfile, and this 
> caused some non-trivial issues for us in the process of upgrading from 1.3 to 
> 1.11.  We fixed this on our NiFi installation by making a small change to the 
> WriteCSVResult.java file and then rebuilding the NiFi record serialization 
> services NAR.


