[jira] [Assigned] (NIFI-10192) LookupRecord attempts first lookup multiple times

2022-09-27 Thread Ryan Miller (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Miller reassigned NIFI-10192:
--

Assignee: Ryan Miller

> LookupRecord attempts first lookup multiple times
> -
>
> Key: NIFI-10192
> URL: https://issues.apache.org/jira/browse/NIFI-10192
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.16.1, 1.16.2, 1.16.3
>Reporter: Chris Norton
>Assignee: Ryan Miller
>Priority: Minor
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> A change was implemented in 
> [NIFI-9903|https://issues.apache.org/jira/browse/NIFI-9903] which results in 
> *lookupService.lookup()* being called twice per record ([at least until the 
> first 
> match|https://github.com/apache/nifi/blob/rel/nifi-1.16.1/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/LookupRecord.java#L350]).
>  For lookup services which are idempotent (CSVRecordLookupService, 
> DistributedMapCacheLookupService, PropertiesFileLookupService) making lookups 
> twice won’t affect the result or have undesired side effects. However, the 
> RestLookupService can make arbitrary HTTP requests for the standard HTTP 
> methods (GET, POST, PUT, DELETE) and there is no guarantee that these 
> requests will be idempotent. POST requests in particular are not expected to 
> be idempotent and may cause undesirable behaviour if invoked multiple times 
> (as in our case).
> As the name suggests, LookupRecord could be expected to be used only to 
> perform lookups which are idempotent and do not have side effects. [Matt 
> Burgess wrote an 
> article|http://funnifi.blogspot.com/2018/08/database-sequence-lookup-with.html]
>  where it seems the expected behaviour was that *lookupService.lookup()* 
> would only be called once. With the changed behaviour, lookups being made 
> twice would now cause IDs to be skipped.
> It was suggested by Mark Payne in a Slack discussion that lookup results 
> could be cached up until the first match, which may alleviate the issues we 
> are seeing.
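The caching idea suggested above can be sketched in plain Java. This is an illustrative model only, not NiFi code: the `CachingLookup` class and its method names are hypothetical stand-ins for the `LookupService` interaction, showing how memoizing results would keep a non-idempotent lookup from firing twice for the same key.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Hypothetical sketch: wrap a (possibly non-idempotent) lookup so the
// underlying service is invoked at most once per distinct key, even when
// the caller probes the same coordinates repeatedly.
class CachingLookup<K, V> {
    private final Function<K, Optional<V>> delegate;
    private final Map<K, Optional<V>> cache = new HashMap<>();
    private int delegateInvocations = 0;

    CachingLookup(Function<K, Optional<V>> delegate) {
        this.delegate = delegate;
    }

    Optional<V> lookup(K key) {
        // computeIfAbsent runs the delegate only when the key is absent,
        // so a second lookup for the same key is served from the cache
        return cache.computeIfAbsent(key, k -> {
            delegateInvocations++;
            return delegate.apply(k);
        });
    }

    int getDelegateInvocations() {
        return delegateInvocations;
    }
}
```

With such a wrapper, a POST-backed RestLookupService would issue one request per unique coordinate set even if the processor asks for the same record twice.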



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-4890) OIDC Token Refresh is not done correctly

2022-09-27 Thread David Handermann (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-4890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Handermann reassigned NIFI-4890:
--

Assignee: David Handermann  (was: Raz Dobkies)

> OIDC Token Refresh is not done correctly
> 
>
> Key: NIFI-4890
> URL: https://issues.apache.org/jira/browse/NIFI-4890
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0
> Environment: Environment:
> Browser: Chrome / Firefox 
> Configuration of NiFi: 
> - SSL certificate for the server (no client auth) 
> - OIDC configuration including end_session_endpoint (see the link 
> https://auth.s.orchestracities.com/auth/realms/default/.well-known/openid-configuration)
>  
>Reporter: Federico Michele Facca
>Assignee: David Handermann
>Priority: Major
>
> It looks like the NiFi UI is not refreshing the OIDC token in the background, 
> and because of that, when the token expires, it tells you that your session 
> has expired and you need to refresh the page to get a new token.





[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6319: NIFI-10381 Refactor Azure Event Hubs components with current SDK

2022-09-27 Thread GitBox


exceptionfactory commented on code in PR #6319:
URL: https://github.com/apache/nifi/pull/6319#discussion_r981922359


##
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/eventhub/position/LegacyBlobStorageEventPositionProvider.java:
##
@@ -0,0 +1,154 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.azure.eventhub.position;
+
+import com.azure.core.util.BinaryData;
+import com.azure.messaging.eventhubs.models.EventPosition;
+import com.azure.storage.blob.BlobAsyncClient;
+import com.azure.storage.blob.BlobContainerAsyncClient;
+import com.azure.storage.blob.models.BlobItem;
+import com.azure.storage.blob.models.BlobListDetails;
+import com.azure.storage.blob.models.ListBlobsOptions;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Event Position Provider using Azure Blob Storage implemented in Azure Event 
Hubs SDK Version 3
+ */
+public class LegacyBlobStorageEventPositionProvider implements 
EventPositionProvider {
+private static final String LEASE_SEQUENCE_NUMBER_FIELD = "sequenceNumber";
+
+private static final Logger logger = 
LoggerFactory.getLogger(LegacyBlobStorageEventPositionProvider.class);
+
+private static final ObjectMapper objectMapper = new ObjectMapper();
+
+private final BlobContainerAsyncClient blobContainerAsyncClient;
+
+private final String consumerGroup;
+
+public LegacyBlobStorageEventPositionProvider(
+final BlobContainerAsyncClient blobContainerAsyncClient,
+final String consumerGroup
+) {
+this.blobContainerAsyncClient = 
Objects.requireNonNull(blobContainerAsyncClient, "Client required");
+this.consumerGroup = Objects.requireNonNull(consumerGroup, "Consumer 
Group required");
+}
+
+/**
+ * Get Initial Partition Event Position using Azure Blob Storage as 
persisted in
+ * 
com.microsoft.azure.eventprocessorhost.AzureStorageCheckpointLeaseManager
+ *
+ * @return Map of Partition and Event Position or empty when no 
checkpoints found
+ */
+@Override
+public Map<String, EventPosition> getInitialPartitionEventPosition() {
+final Map<String, EventPosition> partitionEventPosition;
+
+if (containerExists()) {
+final BlobListDetails blobListDetails = new 
BlobListDetails().setRetrieveMetadata(true);
+final ListBlobsOptions listBlobsOptions = new 
ListBlobsOptions().setPrefix(consumerGroup).setDetails(blobListDetails);
+final Iterable<BlobItem> blobItems = 
blobContainerAsyncClient.listBlobs(listBlobsOptions).toIterable();
+partitionEventPosition = getPartitionEventPosition(blobItems);
+} else {
+partitionEventPosition = Collections.emptyMap();
+}
+
+return partitionEventPosition;
+}
+
+private Map<String, EventPosition> getPartitionEventPosition(final 
Iterable<BlobItem> blobItems) {
+final Map<String, EventPosition> partitionEventPosition = new 
LinkedHashMap<>();
+
+for (final BlobItem blobItem : blobItems) {
+if (Boolean.TRUE.equals(blobItem.isPrefix())) {
+continue;
+}
+
+final String partitionId = getPartitionId(blobItem);
+final EventPosition eventPosition = getEventPosition(blobItem);
+if (eventPosition == null) {
+logger.info("Legacy Event Position not found for Partition 
[{}] Blob [{}]", partitionId, blobItem.getName());
+} else {
+partitionEventPosition.put(partitionId, eventPosition);
+}
+}
+
+return partitionEventPosition;
+}
+
+private String getPartitionId(final BlobItem blobItem) {
+final String blobItemName = blobItem.getName();
+final Path blobItemPath = Paths.get(blobItemName);
+final Path blobItemFileName = blobItemPath.getFileName();
+return 

[GitHub] [nifi] exceptionfactory commented on pull request #6319: NIFI-10381 Refactor Azure Event Hubs components with current SDK

2022-09-27 Thread GitBox


exceptionfactory commented on PR #6319:
URL: https://github.com/apache/nifi/pull/6319#issuecomment-1260354286

   > It may be related to this issue: 
[Azure/azure-sdk-for-java#29927](https://github.com/Azure/azure-sdk-for-java/issues/29927)
 However, we use `azure-messaging-eventhub-checkpointstore-blob:1.15.1` where 
this issue should be fixed.
   
   Thanks for noting the HTTP 409 and 412 messages @turcsanyip, I also noticed 
that during testing. According to the linked issue, it sounds like that is 
expected as part of partition load balancing, but it sounds like others have 
also found it confusing. Something to track for future reference in the Azure 
SDK and upgrade when a new version is available that changes the behavior.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] mattyb149 commented on a diff in pull request #6391: NIFI-9402: Adding DatabaseParameterProvider

2022-09-27 Thread GitBox


mattyb149 commented on code in PR #6391:
URL: https://github.com/apache/nifi/pull/6391#discussion_r981874236


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-parameter-providers/src/main/resources/docs/org.apache.nifi.parameter.DatabaseParameterProvider/additionalDetails.html:
##
@@ -0,0 +1,187 @@
+
+http://www.w3.org/1999/html;>
+
+
+
+
+DatabaseParameterProvider
+
+
+
+
+Providing Parameters from a Database
+
+
+The DatabaseParameterProvider at its core maps database rows to 
Parameters, specified by a
+Parameter Name Column and Parameter Value Column.  The Parameter Group 
name must also be accounted for, and may
+be specified in different ways using the Parameter Grouping Strategy.
+
+
+
+The default configuration uses a fully column-based approach, with the 
Parameter Group Name
+also specified by columns in the same table.  An example of a table using 
this configuration would be:
+
+
+<table>
+    <caption>PARAMETER_CONTEXTS</caption>
+    <tr>
+        <th>PARAMETER_NAME</th><th>PARAMETER_VALUE</th><th>PARAMETER_GROUP</th>
+    </tr>
+    <tr><td>param.foo</td><td>value-foo</td><td>group_1</td></tr>
+    <tr><td>param.bar</td><td>value-bar</td><td>group_1</td></tr>
+    <tr><td>param.one</td><td>value-one</td><td>group_2</td></tr>
+    <tr><td>param.two</td><td>value-two</td><td>group_2</td></tr>
+</table>
+<p>Table 1: Database table example with Grouping Strategy = Column</p>
+
+
+
+In order to use the data from this table, set the following Properties:
+
+
+
+Parameter Grouping Strategy - Column
+Table Name - PARAMETER_CONTEXTS
+Parameter Name Column - PARAMETER_NAME
+Parameter Value Column - PARAMETER_VALUE
+Parameter Group Name Column - PARAMETER_GROUP
+
+
+
+Note: in some databases, the words 'PARAMETER', 'PARAMETERS', 'GROUP', and 
even 'VALUE' are reserved words.

Review Comment:
   Maybe add to this to check the database docs and/or quote the words per the 
DB doc
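The column-based grouping the additionalDetails excerpt describes can be sketched in plain Java. This is a hypothetical helper, not the actual provider code: it folds rows of (name, value, group) columns, as in Table 1, into one parameter map per group.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: map database rows to parameter groups.
// Each row is {PARAMETER_NAME, PARAMETER_VALUE, PARAMETER_GROUP}.
class ColumnGrouping {
    static Map<String, Map<String, String>> group(List<String[]> rows) {
        final Map<String, Map<String, String>> groups = new LinkedHashMap<>();
        for (String[] row : rows) {
            // row[2] selects the group; row[0]/row[1] become a parameter entry
            groups.computeIfAbsent(row[2], g -> new LinkedHashMap<>())
                  .put(row[0], row[1]);
        }
        return groups;
    }
}
```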



##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-parameter-providers/src/main/java/org/apache/nifi/parameter/DatabaseParameterProvider.java:
##
@@ -0,0 +1,254 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.parameter;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.ConfigVerificationResult;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.dbcp.DBCPService;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.db.DatabaseAdapter;
+import org.apache.nifi.util.StringUtils;
+
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.ServiceLoader;
+import java.util.stream.Collectors;
+
+@Tags({"database", "dbcp", "sql"})
+@CapabilityDescription("Fetches parameters from database tables")
+
+public class DatabaseParameterProvider extends AbstractParameterProvider 
implements VerifiableParameterProvider {
+
+protected final static Map<String, DatabaseAdapter> dbAdapters = new 
HashMap<>();
+
+public static final PropertyDescriptor DB_TYPE;
+
+static {
+// Load the DatabaseAdapters
+ArrayList<AllowableValue> dbAdapterValues = new ArrayList<>();
+ServiceLoader<DatabaseAdapter> dbAdapterLoader = 
ServiceLoader.load(DatabaseAdapter.class);
+dbAdapterLoader.forEach(it -> {
+dbAdapters.put(it.getName(), it);
+dbAdapterValues.add(new AllowableValue(it.getName(), it.getName(), 
it.getDescription()));
+});
+
+DB_TYPE = new PropertyDescriptor.Builder()
+.name("db-type")
+.displayName("Database Type")
+.description("The type/flavor of database, used for generating 
database-specific code. In many cases the Generic type "
+

[jira] [Updated] (NIFI-10553) MergeContent Prematurely Evicts Bins

2022-09-27 Thread Eric Secules (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Secules updated NIFI-10553:

Description: 
When NiFi's merge processors are configured to defragment, the user wants 
flowfiles merged in a specific way according to the `fragment.` attributes. 
However, when MergeDocuments is handling many unique values for 
`fragment.identifier` it opens up one bin per value until it reaches the 
`MAX_BIN_COUNT` parameter configured on this processor. This parameter is there 
to limit memory used by merging too many things all at once. It is not certain 
that the user will be able to set this to an appropriate value for every flow, 
and the consequence is that evicting a partially filled bin will result in 
possible downstream issues and flowfiles stuck in the input connection of 
MergeDocuments.

 

Instead of this behaviour, the merge processor should penalize and requeue 
flowfiles that don't fit in any of the existing bins if we have reached the max 
number of bins already. Penalizing non-matching flowfiles will give time for 
the ones needed to complete the existing bins to arrive.

I wrote a unit test on my fork of NiFi which covers this bug: 
https://github.com/esecules/nifi/blob/2e5074eabfc0be100491fa007329ce9492382af7/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestMergeContent.java#L1091

  was:
When NiFi's merge processors are configured to defragment, the user wants 
flowfiles merged in a specific way according to the `fragment.` attributes. 
However, when MergeDocuments is handling many unique values for 
`fragment.identifier` it opens up one bin per value until it reaches the 
`MAX_BIN_COUNT` parameter configured on this processor. This parameter is there 
to limit memory used by merging too many things all at once. It is not certain 
that the user will be able to set this to an appropriate value for every flow, 
and the consequence is that evicting a partially filled bin will result in 
possible downstream issues and flowfiles stuck in the input connection of 
MergeDocuments.

 

Instead of this behaviour, the merge processor should penalize and requeue 
flowfiles that don't fit in any of the existing bins if we have reached the max 
number of bins already. Penalizing non-matching flowfiles will give time for 
the ones needed to complete the existing bins to arrive.


> MergeContent Prematurely Evicts Bins
> 
>
> Key: NIFI-10553
> URL: https://issues.apache.org/jira/browse/NIFI-10553
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.14.0, 1.16.3
>Reporter: Eric Secules
>Priority: Major
>
> When NiFi's merge processors are configured to defragment, the user wants 
> flowfiles merged in a specific way according to the `fragment.` attributes. 
> However, when MergeDocuments is handling many unique values for 
> `fragment.identifier` it opens up one bin per value until it reaches the 
> `MAX_BIN_COUNT` parameter configured on this processor. This parameter is 
> there to limit memory used by merging too many things all at once. It is not 
> certain that the user will be able to set this to an appropriate value for 
> every flow, and the consequence is that evicting a partially filled bin will 
> result in possible downstream issues and flowfiles stuck in the input 
> connection of MergeDocuments.
>  
> Instead of this behaviour, the merge processor should penalize and requeue 
> flowfiles that don't fit in any of the existing bins if we have reached the 
> max number of bins already. Penalizing non-matching flowfiles will give time 
> for the ones needed to complete the existing bins to arrive.
> I wrote a unit test on my fork of NiFi which covers this bug: 
> https://github.com/esecules/nifi/blob/2e5074eabfc0be100491fa007329ce9492382af7/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestMergeContent.java#L1091
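The penalize-and-requeue behaviour proposed in the description can be sketched as follows. This is an illustrative model under stated assumptions: the `DefragmentBins` class, its `offer` method, and the string stand-ins for flowfiles are hypothetical, not the actual MergeContent binning code. When no bin matches the fragment identifier and the bin limit is reached, the flowfile is requeued instead of a partial bin being evicted.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of penalize-and-requeue binning for defragment mode.
class DefragmentBins {
    private final int maxBinCount;
    private final Map<String, List<String>> bins = new LinkedHashMap<>();
    private final Deque<String> penalized = new ArrayDeque<>();

    DefragmentBins(int maxBinCount) {
        this.maxBinCount = maxBinCount;
    }

    // Returns true if the flowfile was binned; false if it was penalized and
    // requeued because all bins are taken by other fragment identifiers.
    boolean offer(String fragmentIdentifier, String flowFile) {
        List<String> bin = bins.get(fragmentIdentifier);
        if (bin == null) {
            if (bins.size() >= maxBinCount) {
                // penalize and requeue rather than evicting a partial bin,
                // giving the missing fragments time to arrive
                penalized.add(flowFile);
                return false;
            }
            bin = new ArrayList<>();
            bins.put(fragmentIdentifier, bin);
        }
        bin.add(flowFile);
        return true;
    }

    int penalizedCount() {
        return penalized.size();
    }
}
```

A flowfile for a new `fragment.identifier` is only rejected while the bin table is full; members of already-open bins continue to accumulate, so complete bins can still merge and free their slots.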





[jira] [Updated] (NIFI-10553) MergeContent Prematurely Evicts Bins

2022-09-27 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-10553:

Fix Version/s: (was: 1.18.0)

> MergeContent Prematurely Evicts Bins
> 
>
> Key: NIFI-10553
> URL: https://issues.apache.org/jira/browse/NIFI-10553
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.14.0, 1.16.3
>Reporter: Eric Secules
>Priority: Major
>
> When NiFi's merge processors are configured to defragment, the user wants 
> flowfiles merged in a specific way according to the `fragment.` attributes. 
> However, when MergeDocuments is handling many unique values for 
> `fragment.identifier` it opens up one bin per value until it reaches the 
> `MAX_BIN_COUNT` parameter configured on this processor. This parameter is 
> there to limit memory used by merging too many things all at once. It is not 
> certain that the user will be able to set this to an appropriate value for 
> every flow, and the consequence is that evicting a partially filled bin will 
> result in possible downstream issues and flowfiles stuck in the input 
> connection of MergeDocuments.
>  
> Instead of this behaviour, the merge processor should penalize and requeue 
> flowfiles that don't fit in any of the existing bins if we have reached the 
> max number of bins already. Penalizing non-matching flowfiles will give time 
> for the ones needed to complete the existing bins to arrive.





[jira] [Created] (NIFI-10553) MergeContent Prematurely Evicts Bins

2022-09-27 Thread Eric Secules (Jira)
Eric Secules created NIFI-10553:
---

 Summary: MergeContent Prematurely Evicts Bins
 Key: NIFI-10553
 URL: https://issues.apache.org/jira/browse/NIFI-10553
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.16.3, 1.14.0
Reporter: Eric Secules
 Fix For: 1.18.0


When NiFi's merge processors are configured to defragment, the user wants 
flowfiles merged in a specific way according to the `fragment.` attributes. 
However, when MergeDocuments is handling many unique values for 
`fragment.identifier` it opens up one bin per value until it reaches the 
`MAX_BIN_COUNT` parameter configured on this processor. This parameter is there 
to limit memory used by merging too many things all at once. It is not certain 
that the user will be able to set this to an appropriate value for every flow, 
and the consequence is that evicting a partially filled bin will result in 
possible downstream issues and flowfiles stuck in the input connection of 
MergeDocuments.

 

Instead of this behaviour, the merge processor should penalize and requeue 
flowfiles that don't fit in any of the existing bins if we have reached the max 
number of bins already. Penalizing non-matching flowfiles will give time for 
the ones needed to complete the existing bins to arrive.





[GitHub] [nifi] turcsanyip commented on a diff in pull request #6319: NIFI-10381 Refactor Azure Event Hubs components with current SDK

2022-09-27 Thread GitBox


turcsanyip commented on code in PR #6319:
URL: https://github.com/apache/nifi/pull/6319#discussion_r98199


##
nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/eventhub/position/LegacyBlobStorageEventPositionProvider.java:
##
@@ -0,0 +1,154 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.azure.eventhub.position;
+
+import com.azure.core.util.BinaryData;
+import com.azure.messaging.eventhubs.models.EventPosition;
+import com.azure.storage.blob.BlobAsyncClient;
+import com.azure.storage.blob.BlobContainerAsyncClient;
+import com.azure.storage.blob.models.BlobItem;
+import com.azure.storage.blob.models.BlobListDetails;
+import com.azure.storage.blob.models.ListBlobsOptions;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Event Position Provider using Azure Blob Storage implemented in Azure Event 
Hubs SDK Version 3
+ */
+public class LegacyBlobStorageEventPositionProvider implements 
EventPositionProvider {
+private static final String LEASE_SEQUENCE_NUMBER_FIELD = "sequenceNumber";
+
+private static final Logger logger = 
LoggerFactory.getLogger(LegacyBlobStorageEventPositionProvider.class);
+
+private static final ObjectMapper objectMapper = new ObjectMapper();
+
+private final BlobContainerAsyncClient blobContainerAsyncClient;
+
+private final String consumerGroup;
+
+public LegacyBlobStorageEventPositionProvider(
+final BlobContainerAsyncClient blobContainerAsyncClient,
+final String consumerGroup
+) {
+this.blobContainerAsyncClient = 
Objects.requireNonNull(blobContainerAsyncClient, "Client required");
+this.consumerGroup = Objects.requireNonNull(consumerGroup, "Consumer 
Group required");
+}
+
+/**
+ * Get Initial Partition Event Position using Azure Blob Storage as 
persisted in
+ * 
com.microsoft.azure.eventprocessorhost.AzureStorageCheckpointLeaseManager
+ *
+ * @return Map of Partition and Event Position or empty when no 
checkpoints found
+ */
+@Override
+public Map<String, EventPosition> getInitialPartitionEventPosition() {
+final Map<String, EventPosition> partitionEventPosition;
+
+if (containerExists()) {
+final BlobListDetails blobListDetails = new 
BlobListDetails().setRetrieveMetadata(true);
+final ListBlobsOptions listBlobsOptions = new 
ListBlobsOptions().setPrefix(consumerGroup).setDetails(blobListDetails);
+final Iterable<BlobItem> blobItems = 
blobContainerAsyncClient.listBlobs(listBlobsOptions).toIterable();
+partitionEventPosition = getPartitionEventPosition(blobItems);
+} else {
+partitionEventPosition = Collections.emptyMap();
+}
+
+return partitionEventPosition;
+}
+
+private Map<String, EventPosition> getPartitionEventPosition(final 
Iterable<BlobItem> blobItems) {
+final Map<String, EventPosition> partitionEventPosition = new 
LinkedHashMap<>();
+
+for (final BlobItem blobItem : blobItems) {
+if (Boolean.TRUE.equals(blobItem.isPrefix())) {
+continue;
+}
+
+final String partitionId = getPartitionId(blobItem);
+final EventPosition eventPosition = getEventPosition(blobItem);
+if (eventPosition == null) {
+logger.info("Legacy Event Position not found for Partition 
[{}] Blob [{}]", partitionId, blobItem.getName());
+} else {
+partitionEventPosition.put(partitionId, eventPosition);
+}
+}
+
+return partitionEventPosition;
+}
+
+private String getPartitionId(final BlobItem blobItem) {
+final String blobItemName = blobItem.getName();
+final Path blobItemPath = Paths.get(blobItemName);
+final Path blobItemFileName = blobItemPath.getFileName();
+return 

[jira] [Updated] (NIFI-10521) Conduct Apache NiFi 1.18.0 Release

2022-09-27 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-10521:

Summary: Conduct Apache NiFi 1.18.0 Release  (was: Apache NiFi 1.18.0 RC1 
RM)

> Conduct Apache NiFi 1.18.0 Release
> --
>
> Key: NIFI-10521
> URL: https://issues.apache.org/jira/browse/NIFI-10521
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.18.0
>
>






[jira] [Commented] (NIFI-10521) Apache NiFi 1.18.0 RC1 RM

2022-09-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610237#comment-17610237
 ] 

ASF subversion and git services commented on NIFI-10521:


Commit fdd94009b3c903d726a02500b61b58d22039b500 in nifi's branch 
refs/heads/main from Joe Witt
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=fdd94009b3 ]

NIFI-10521 updating docker refs


> Apache NiFi 1.18.0 RC1 RM
> -
>
> Key: NIFI-10521
> URL: https://issues.apache.org/jira/browse/NIFI-10521
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Joe Witt
>Assignee: Joe Witt
>Priority: Major
> Fix For: 1.18.0
>
>






[jira] [Resolved] (NIFI-10552) Ranger CredentialBuilder throws NoClassDefFoundException

2022-09-27 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt resolved NIFI-10552.
-
Resolution: Fixed

> Ranger CredentialBuilder throws NoClassDefFoundException
> 
>
> Key: NIFI-10552
> URL: https://issues.apache.org/jira/browse/NIFI-10552
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
> Fix For: 1.18.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When trying to create a Ranger keystore/truststore I got the following exception:
> {code}
> java.lang.NoClassDefFoundError: org/apache/commons/lang3/StringUtils
>   at 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory.getName(MutableMetricsFactory.java:134)
>   at 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory.getInfo(MutableMetricsFactory.java:130)
>   at 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory.newForField(MutableMetricsFactory.java:45)
>   at 
> org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.add(MetricsSourceBuilder.java:147)
>   at 
> org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.(MetricsSourceBuilder.java:69)
>   at 
> org.apache.hadoop.metrics2.lib.MetricsAnnotations.newSourceBuilder(MetricsAnnotations.java:43)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:223)
>   at 
> org.apache.hadoop.metrics2.MetricsSystem.register(MetricsSystem.java:71)
>   at 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.create(UserGroupInformation.java:149)
>   at 
> org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:265)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3614)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:3604)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3441)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.initFileSystem(JavaKeyStoreProvider.java:89)
>   at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.<init>(AbstractJavaKeyStoreProvider.java:85)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:49)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:41)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:100)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:73)
>   at 
> org.apache.ranger.credentialapi.CredentialReader.getDecryptedString(CredentialReader.java:74)
>   at 
> org.apache.ranger.credentialapi.buildks.createCredential(buildks.java:87)
>   at org.apache.ranger.credentialapi.buildks.main(buildks.java:41)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.commons.lang3.StringUtils
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
>   ... 24 more
> {code}
> Command to reproduce:
> {code}
> java -cp "ext/ranger/install/lib/*" org.apache.ranger.credentialapi.buildks 
> create sslTrustStore -value "test" -provider 
> "jceks://file/<>/test2.jceks"
> {code}





[jira] [Commented] (NIFI-10552) Ranger CredentialBuilder throws NoClassDefFoundException

2022-09-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610184#comment-17610184
 ] 

ASF subversion and git services commented on NIFI-10552:


Commit b28a211bf36d4d6e6e73eb14627def23ac8471af in nifi's branch 
refs/heads/main from Zoltan Kornel Torok
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=b28a211bf3 ]

NIFI-10552 This closes #6453. Fix ranger credential builder NoClassDefFoundError

Signed-off-by: Joe Witt 


> Ranger CredentialBuilder throws NoClassDefFoundException
> 
>
> Key: NIFI-10552
> URL: https://issues.apache.org/jira/browse/NIFI-10552
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
> Fix For: 1.18.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When trying to create a Ranger keystore/truststore, I got the following exception:
> {code}
> java.lang.NoClassDefFoundError: org/apache/commons/lang3/StringUtils
>   at 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory.getName(MutableMetricsFactory.java:134)
>   at 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory.getInfo(MutableMetricsFactory.java:130)
>   at 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory.newForField(MutableMetricsFactory.java:45)
>   at 
> org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.add(MetricsSourceBuilder.java:147)
>   at 
> org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.(MetricsSourceBuilder.java:69)
>   at 
> org.apache.hadoop.metrics2.lib.MetricsAnnotations.newSourceBuilder(MetricsAnnotations.java:43)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:223)
>   at 
> org.apache.hadoop.metrics2.MetricsSystem.register(MetricsSystem.java:71)
>   at 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.create(UserGroupInformation.java:149)
>   at 
> org.apache.hadoop.security.UserGroupInformation.(UserGroupInformation.java:265)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:3614)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:3604)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3441)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.initFileSystem(JavaKeyStoreProvider.java:89)
>   at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:49)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:41)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:100)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:73)
>   at 
> org.apache.ranger.credentialapi.CredentialReader.getDecryptedString(CredentialReader.java:74)
>   at 
> org.apache.ranger.credentialapi.buildks.createCredential(buildks.java:87)
>   at org.apache.ranger.credentialapi.buildks.main(buildks.java:41)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.commons.lang3.StringUtils
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
>   ... 24 more
> {code}
> Command to reproduce:
> {code}
> java -cp "ext/ranger/install/lib/*" org.apache.ranger.credentialapi.buildks 
> create sslTrustStore -value "test" -provider 
> "jceks://file/<>/test2.jceks"
> {code}
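The root `ClassNotFoundException` shows that `org.apache.commons.lang3.StringUtils` was not visible to the application class loader, i.e. no commons-lang3 jar was resolvable from the `-cp` wildcard directory. Before re-running the builder, a small hypothetical probe (only the class name is taken from the trace; everything else is illustrative) can confirm whether the class is reachable:

```java
// Hypothetical classpath probe -- not part of the Ranger credential builder.
public class ClasspathProbe {
    // Returns true when the named class can be loaded from the current classpath.
    static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Class name taken from the stack trace in the report above.
        String probed = "org.apache.commons.lang3.StringUtils";
        System.out.println(isPresent(probed)
                ? "commons-lang3 present"
                : "commons-lang3 missing from the classpath");
    }
}
```

Running this with the same `-cp "ext/ranger/install/lib/*"` as the reproduce command would tell you whether the classpath, rather than the builder itself, is at fault.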



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] asfgit closed pull request #6453: NIFI-10552 Fix ranger credential builder NoClassDefFoundError

2022-09-27 Thread GitBox


asfgit closed pull request #6453: NIFI-10552 Fix ranger credential builder 
NoClassDefFoundError
URL: https://github.com/apache/nifi/pull/6453


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-10528) Extract JSON readers to util module for use in Salesforce NAR

2022-09-27 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy updated NIFI-10528:
---
Fix Version/s: 1.18.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Extract JSON readers to util module for use in Salesforce NAR
> -
>
> Key: NIFI-10528
> URL: https://issues.apache.org/jira/browse/NIFI-10528
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Bryan Bende
>Assignee: Bryan Bende
>Priority: Minor
> Fix For: 1.18.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The Salesforce NAR currently depends on the record serialization services NAR 
> in order to use the JsonTreeRecordReader. We should extract these readers to 
> a util module so that they can be reused and the Salesforce NAR can depend on 
> the standard services API NAR.
> Also, the OAuth2 API jar is being included when it should be provided by the 
> standard services API.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-10513) QuerySalesforceObject can only list the first 2000 records

2022-09-27 Thread Tamas Palfy (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Palfy updated NIFI-10513:
---
Fix Version/s: 1.18.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> QuerySalesforceObject can only list the first 2000 records
> --
>
> Key: NIFI-10513
> URL: https://issues.apache.org/jira/browse/NIFI-10513
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 1.18.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> QuerySalesforceObject does not use pagination, so it cannot list objects with 
> more than 2000 records. When more than 2000 records exist, a page cursor is 
> returned in the response body. In order to capture such cursor fields, we 
> should extend JsonTreeRowRecordReader with the capability of capturing 
> non-record fields based on a predicate.
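Cursor-based paging of this kind means the client must keep requesting pages until the service stops returning a cursor. A minimal sketch of that loop — the class name, the `Map.Entry` page shape, and the cursor strings are illustrative, not the actual QuerySalesforceObject code:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class PagedFetcher {
    /**
     * Drains a cursor-paged source. fetchPage maps a cursor (null = first page)
     * to a (records, nextCursor) pair; a null nextCursor ends the loop.
     */
    public static List<String> fetchAll(
            Function<String, Map.Entry<List<String>, String>> fetchPage) {
        List<String> all = new ArrayList<>();
        String cursor = null;
        do {
            Map.Entry<List<String>, String> page = fetchPage.apply(cursor);
            all.addAll(page.getKey());
            cursor = page.getValue();
        } while (cursor != null);
        return all;
    }

    public static void main(String[] args) {
        // Simulate a source that splits three records across two pages.
        List<String> out = fetchAll(cursor -> cursor == null
                ? new AbstractMap.SimpleEntry<>(List.of("rec1", "rec2"), "cursor-2")
                : new AbstractMap.SimpleEntry<>(List.of("rec3"), (String) null));
        System.out.println(out);
    }
}
```

In the real processor the cursor would be the page reference extracted from the response body, which is why the reader needs the ability to surface non-record fields.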



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] tpalfy commented on pull request #6444: NIFI-10513: Added capture non-record fields to JsonTreeRowRecordReader

2022-09-27 Thread GitBox


tpalfy commented on PR #6444:
URL: https://github.com/apache/nifi/pull/6444#issuecomment-1259854828

   LGTM
   Thanks for your work @Lehel44 !
   Merged to main.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (NIFI-10513) QuerySalesforceObject can only list the first 2000 records

2022-09-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610180#comment-17610180
 ] 

ASF subversion and git services commented on NIFI-10513:


Commit 63aac1a31d5d35fb133d5768abf99201964a16b4 in nifi's branch 
refs/heads/main from Lehel Boér
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=63aac1a31d ]

NIFI-10513: Added capture non-record fields to JsonTreeRowRecordReader, added 
pagination to QuerySalesforceObject

This closes #6444.

Signed-off-by: Tamas Palfy 


> QuerySalesforceObject can only list the first 2000 records
> --
>
> Key: NIFI-10513
> URL: https://issues.apache.org/jira/browse/NIFI-10513
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> QuerySalesforceObject does not use pagination, so it cannot list objects with 
> more than 2000 records. When more than 2000 records exist, a page cursor is 
> returned in the response body. In order to capture such cursor fields, we 
> should extend JsonTreeRowRecordReader with the capability of capturing 
> non-record fields based on a predicate.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] asfgit closed pull request #6444: NIFI-10513: Added capture non-record fields to JsonTreeRowRecordReader

2022-09-27 Thread GitBox


asfgit closed pull request #6444: NIFI-10513: Added capture non-record fields 
to JsonTreeRowRecordReader
URL: https://github.com/apache/nifi/pull/6444


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (NIFI-10523) Additional documentation for List/FetchGoogleDrive

2022-09-27 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi reassigned NIFI-10523:
--

Assignee: Tamas Palfy

> Additional documentation for List/FetchGoogleDrive
> -
>
> Key: NIFI-10523
> URL: https://issues.apache.org/jira/browse/NIFI-10523
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Update used documentation with the following info:
>  * How to get the Folder ID
>  * How to enable Google Drive API in GCP
>  * How to grant access to a Google Drive folder for a Service Account



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (NIFI-10523) Additional documentation for List/FetchGoogleDrive

2022-09-27 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi resolved NIFI-10523.

Fix Version/s: 1.18.0
   Resolution: Fixed

> Additional documentation for List/FetchGoogleDrive
> -
>
> Key: NIFI-10523
> URL: https://issues.apache.org/jira/browse/NIFI-10523
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Tamas Palfy
>Assignee: Tamas Palfy
>Priority: Major
> Fix For: 1.18.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Update used documentation with the following info:
>  * How to get the Folder ID
>  * How to enable Google Drive API in GCP
>  * How to grant access to a Google Drive folder for a Service Account



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-10523) Additional documentation for List/FetchGoogleDrive

2022-09-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610175#comment-17610175
 ] 

ASF subversion and git services commented on NIFI-10523:


Commit 1d9e119084bdfa82796f1ccd50f8d030c2758be5 in nifi's branch 
refs/heads/main from Tamas Palfy
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1d9e119084 ]

NIFI-10523 - Improved Google Drive processor documentation.

This closes #6430.

Signed-off-by: Peter Turcsanyi 


> Additional documentation for List/FetchGoogleDrive
> -
>
> Key: NIFI-10523
> URL: https://issues.apache.org/jira/browse/NIFI-10523
> Project: Apache NiFi
>  Issue Type: Task
>Reporter: Tamas Palfy
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update used documentation with the following info:
>  * How to get the Folder ID
>  * How to enable Google Drive API in GCP
>  * How to grant access to a Google Drive folder for a Service Account



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] asfgit closed pull request #6430: NIFI-10523 - Updated Google Drive processor documentation.

2022-09-27 Thread GitBox


asfgit closed pull request #6430: NIFI-10523 - Updated Google Drive processor 
documentation.
URL: https://github.com/apache/nifi/pull/6430


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] joewitt commented on pull request #6453: NIFI-10552 Fix ranger credential builder NoClassDefFoundError

2022-09-27 Thread GitBox


joewitt commented on PR #6453:
URL: https://github.com/apache/nifi/pull/6453#issuecomment-1259804074

   thanks - will review/merge


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi] taz1988 opened a new pull request, #6453: NIFI-10552 Fix ranger credential builder NoClassDefFoundError

2022-09-27 Thread GitBox


taz1988 opened a new pull request, #6453:
URL: https://github.com/apache/nifi/pull/6453

   # Summary
   
   [NIFI-10552](https://issues.apache.org/jira/browse/NIFI-10552)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI-10552) 
issue created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 8
 - [ ] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (NIFI-10552) Ranger CredentialBuilder throws NoClassDefFoundException

2022-09-27 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-10552:

Fix Version/s: 1.18.0

> Ranger CredentialBuilder throws NoClassDefFoundException
> 
>
> Key: NIFI-10552
> URL: https://issues.apache.org/jira/browse/NIFI-10552
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
> Fix For: 1.18.0
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (NIFI-10552) Ranger CredentialBuilder throws NoClassDefFoundException

2022-09-27 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltán Kornél Török reassigned NIFI-10552:
--

Assignee: Zoltán Kornél Török

> Ranger CredentialBuilder throws NoClassDefFoundException
> 
>
> Key: NIFI-10552
> URL: https://issues.apache.org/jira/browse/NIFI-10552
> Project: Apache NiFi
>  Issue Type: Bug
>Reporter: Zoltán Kornél Török
>Assignee: Zoltán Kornél Török
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-10552) Ranger CredentialBuilder throws NoClassDefFoundException

2022-09-27 Thread Jira
Zoltán Kornél Török created NIFI-10552:
--

 Summary: Ranger CredentialBuilder throws NoClassDefFoundException
 Key: NIFI-10552
 URL: https://issues.apache.org/jira/browse/NIFI-10552
 Project: Apache NiFi
  Issue Type: Bug
Reporter: Zoltán Kornél Török


When trying to create a Ranger keystore/truststore, I got the following exception:
{code}
java.lang.NoClassDefFoundError: org/apache/commons/lang3/StringUtils
at 
org.apache.hadoop.metrics2.lib.MutableMetricsFactory.getName(MutableMetricsFactory.java:134)
at 
org.apache.hadoop.metrics2.lib.MutableMetricsFactory.getInfo(MutableMetricsFactory.java:130)
at 
org.apache.hadoop.metrics2.lib.MutableMetricsFactory.newForField(MutableMetricsFactory.java:45)
at 
org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.add(MetricsSourceBuilder.java:147)
at 
org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.(MetricsSourceBuilder.java:69)
at 
org.apache.hadoop.metrics2.lib.MetricsAnnotations.newSourceBuilder(MetricsAnnotations.java:43)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:223)
at 
org.apache.hadoop.metrics2.MetricsSystem.register(MetricsSystem.java:71)
at 
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.create(UserGroupInformation.java:149)
at 
org.apache.hadoop.security.UserGroupInformation.(UserGroupInformation.java:265)
at 
org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:3614)
at 
org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:3604)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3441)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.initFileSystem(JavaKeyStoreProvider.java:89)
at 
org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:49)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:41)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:100)
at 
org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:73)
at 
org.apache.ranger.credentialapi.CredentialReader.getDecryptedString(CredentialReader.java:74)
at 
org.apache.ranger.credentialapi.buildks.createCredential(buildks.java:87)
at org.apache.ranger.credentialapi.buildks.main(buildks.java:41)
Caused by: java.lang.ClassNotFoundException: 
org.apache.commons.lang3.StringUtils
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 24 more
{code}

Command to reproduce:
{code}
java -cp "ext/ranger/install/lib/*" org.apache.ranger.credentialapi.buildks 
create sslTrustStore -value "test" -provider 
"jceks://file/<>/test2.jceks"
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-10442) Create PutIceberg processor

2022-09-27 Thread Joe Witt (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610140#comment-17610140
 ] 

Joe Witt commented on NIFI-10442:
-

Good work is ongoing, but I'm going to punt this from 1.18 and get the RC1 going. 
Certainly if this shows up I'll grab it.

> Create PutIceberg processor
> ---
>
> Key: NIFI-10442
> URL: https://issues.apache.org/jira/browse/NIFI-10442
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Bathori
>Assignee: Mark Bathori
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Add a processor that is able to ingest data into Iceberg tables.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-10442) Create PutIceberg processor

2022-09-27 Thread Joe Witt (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Witt updated NIFI-10442:

Fix Version/s: (was: 1.18.0)

> Create PutIceberg processor
> ---
>
> Key: NIFI-10442
> URL: https://issues.apache.org/jira/browse/NIFI-10442
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Mark Bathori
>Assignee: Mark Bathori
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Add a processor that is able to ingest data into Iceberg tables.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-10551) Improve GetHubSpot documentation

2022-09-27 Thread Peter Turcsanyi (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Turcsanyi updated NIFI-10551:
---
Fix Version/s: 1.18.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Improve GetHubSpot documentation
> 
>
> Key: NIFI-10551
> URL: https://issues.apache.org/jira/browse/NIFI-10551
> Project: Apache NiFi
>  Issue Type: Improvement
> Environment: Improve GetHubSpot documentation with incremental 
> loading capabilities.
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
> Fix For: 1.18.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] asfgit closed pull request #6452: NIFI-10551: Improve GetHubSpot documentation

2022-09-27 Thread GitBox


asfgit closed pull request #6452: NIFI-10551: Improve GetHubSpot documentation
URL: https://github.com/apache/nifi/pull/6452


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (NIFI-10551) Improve GetHubSpot documentation

2022-09-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610105#comment-17610105
 ] 

ASF subversion and git services commented on NIFI-10551:


Commit f14f940389346a2b6e6a1940accbda69cde62ab9 in nifi's branch 
refs/heads/main from Lehel Boér
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=f14f940389 ]

NIFI-10551: Improve GetHubSpot documentation

This closes #6452.

Signed-off-by: Peter Turcsanyi 


> Improve GetHubSpot documentation
> 
>
> Key: NIFI-10551
> URL: https://issues.apache.org/jira/browse/NIFI-10551
> Project: Apache NiFi
>  Issue Type: Improvement
> Environment: Improve GetHubSpot documentation with incremental 
> loading capabilities.
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [nifi] r65535 commented on pull request #6337: NIFI-7392: Add ValidateJson processor to standard bundle

2022-09-27 Thread GitBox


r65535 commented on PR #6337:
URL: https://github.com/apache/nifi/pull/6337#issuecomment-1259586786

   @exceptionfactory - do you mind taking a look at this PR? I'm keen to get it 
merged into the nifi code base, if possible!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] adam-markovics commented on a diff in pull request #1423: MINIFICPP-1941 Upgrade OPC UA library to version 1.3.3

2022-09-27 Thread GitBox


adam-markovics commented on code in PR #1423:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1423#discussion_r981254953


##
extensions/opc/include/opc.h:
##
@@ -55,22 +55,22 @@ class Client {
   ~Client();
   NodeData getNodeData(const UA_ReferenceDescription *ref, const std::string& 
basePath = "");
   UA_ReferenceDescription * getNodeReference(UA_NodeId nodeId);
-  void traverse(UA_NodeId nodeId, std::function cb, 
const std::string& basePath = "", uint64_t maxDepth = 0, bool fetchRoot = true);
+  void traverse(UA_NodeId nodeId, const std::function& 
cb, const std::string& basePath = "", uint64_t maxDepth = 0, bool fetchRoot = 
true);
   bool exists(UA_NodeId nodeId);
   UA_StatusCode translateBrowsePathsToNodeIdsRequest(const std::string& path, 
std::vector& foundNodeIDs, const 
std::shared_ptr& logger);
 
   template
   UA_StatusCode update_node(const UA_NodeId nodeId, T value);
 
   template
-  UA_StatusCode add_node(const UA_NodeId parentNodeId, const UA_NodeId 
targetNodeId, std::string browseName, T value, UA_NodeId *receivedNodeId);
+  UA_StatusCode add_node(const UA_NodeId parentNodeId, const UA_NodeId 
targetNodeId, const std::string& browseName, T value, UA_NodeId 
*receivedNodeId);

Review Comment:
   std::string_view could be used here, it would not allocate on a const char*



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [nifi-minifi-cpp] martinzink opened a new pull request, #1424: MINIFICPP-1862 use std::filesystem::path instead of std::string where appropriate…

2022-09-27 Thread GitBox


martinzink opened a new pull request, #1424:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1424

   …-1636
   
   Change leftover boost filesystem usages from MINIFICPP-1636
   
   
   ---
   
   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [x] Is there a JIRA ticket associated with this PR? Is it referenced
in the commit message?
   
   - [x] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
   
   - [x] Has your PR been rebased against the latest commit within the target 
branch (typically main)?
   
   - [x] Is your initial contribution a single, squashed commit?
   
   ### For code changes:
   - [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the LICENSE file?
   - [x] If applicable, have you updated the NOTICE file?
   
   ### For documentation related changes:
   - [x] Have you ensured that format looks appropriate for the output in which 
it is rendered?
   
   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI 
results for build issues and submit an update to your PR as soon as possible.
   





[jira] [Updated] (MINIFICPP-1934) Implement PutTCP processor

2022-09-27 Thread Martin Zink (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Zink updated MINIFICPP-1934:
---
Status: Patch Available  (was: In Progress)

https://github.com/apache/nifi-minifi-cpp/pull/1419

> Implement PutTCP processor
> --
>
> Key: MINIFICPP-1934
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1934
> Project: Apache NiFi MiNiFi C++
>  Issue Type: New Feature
>Reporter: Marton Szasz
>Assignee: Martin Zink
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MINIFICPP-1923) Refactor PutUDP to use asio

2022-09-27 Thread Martin Zink (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Zink updated MINIFICPP-1923:
---
Status: Patch Available  (was: In Progress)

https://github.com/apache/nifi-minifi-cpp/pull/1412

> Refactor PutUDP to use asio
> ---
>
> Key: MINIFICPP-1923
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1923
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Martin Zink
>Assignee: Martin Zink
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When the PutUDP processor was implemented we didn't have the asio dependency, 
> but now that we do, using it could simplify the code.





[jira] [Updated] (MINIFICPP-1939) Enable dual stack listening on ListenTCP and ListenSyslog

2022-09-27 Thread Martin Zink (Jira)


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martin Zink updated MINIFICPP-1939:
---
Status: Patch Available  (was: In Progress)

https://github.com/apache/nifi-minifi-cpp/pull/1412

> Enable dual stack listening on ListenTCP and ListenSyslog
> -
>
> Key: MINIFICPP-1939
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1939
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Improvement
>Reporter: Martin Zink
>Assignee: Martin Zink
>Priority: Major
>
> Currently ListenTCP and ListenSyslog listen on IPv4 only.
> All supported platforms support dual-stack mode, so we should listen on IPv6, 
> which will handle both IPv4 and IPv6 traffic.





[jira] [Commented] (MINIFICPP-1815) PersistenceTests transiently fails

2022-09-27 Thread Ferenc Gerlits (Jira)


[ 
https://issues.apache.org/jira/browse/MINIFICPP-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610050#comment-17610050
 ] 

Ferenc Gerlits commented on MINIFICPP-1815:
---

There are probably multiple issues with this test.

https://github.com/apache/nifi-minifi-cpp/pull/1418 fixed one of them, but 
there must be others, since the test still fails sometimes in CI jobs, e.g. 
https://github.com/apache/nifi-minifi-cpp/actions/runs/3106745250/jobs/5047355778

> PersistenceTests transiently fails
> --
>
> Key: MINIFICPP-1815
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1815
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Gábor Gyimesi
>Assignee: Ferenc Gerlits
>Priority: Minor
> Attachments: PersistenceTests_failure_windows.log, 
> persistancetest-failure-ubuntu.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> More info in attached logs





[jira] [Created] (MINIFICPP-1942) Implement MQTT 5 Request-Response pattern

2022-09-27 Thread Jira
Ádám Markovics created MINIFICPP-1942:
-

 Summary: Implement MQTT 5 Request-Response pattern
 Key: MINIFICPP-1942
 URL: https://issues.apache.org/jira/browse/MINIFICPP-1942
 Project: Apache NiFi MiNiFi C++
  Issue Type: Improvement
Reporter: Ádám Markovics
Assignee: Ádám Markovics


Possibly as a new processor.
https://www.hivemq.com/blog/mqtt5-essentials-part9-request-response-pattern/

onSchedule():
- get response information from broker

onTrigger():
- subscribe to response topic
- publish with correlation data and response topic set
- wait for response

Questions:
- what should happen to incoming flow files? replace content with response or 
create new ones instead?
- how should response topic be set? concatenate to response information?

Further ideas:
- also implement for Last Will in CONNECT packet





[jira] [Commented] (NIFI-4890) OIDC Token Refresh is not done correctly

2022-09-27 Thread Jonny Newald (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-4890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17610042#comment-17610042
 ] 

Jonny Newald commented on NIFI-4890:


Hi folks. Is there any workaround for this?

I wonder why there aren't many more complaints about that. It's really not 
usable...

> OIDC Token Refresh is not done correctly
> 
>
> Key: NIFI-4890
> URL: https://issues.apache.org/jira/browse/NIFI-4890
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.5.0
> Environment: Environment:
> Browser: Chrome / Firefox 
> Configuration of NiFi: 
> - SSL certificate for the server (no client auth) 
> - OIDC configuration including end_session_endpoint (see the link 
> https://auth.s.orchestracities.com/auth/realms/default/.well-known/openid-configuration)
>  
>Reporter: Federico Michele Facca
>Assignee: Raz Dobkies
>Priority: Major
>
> It looks like the NIFI UI is not refreshing the OIDC token in background, and 
> because of that, when the token expires, tells you that your session is 
> expired. and you need to refresh the page, to get a new token.





[jira] [Resolved] (MINIFICPP-1871) ConsumeMQTT fails upon agent restart

2022-09-27 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ádám Markovics resolved MINIFICPP-1871.
---
Resolution: Fixed

> ConsumeMQTT fails upon agent restart
> 
>
> Key: MINIFICPP-1871
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1871
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andre Araujo
>Assignee: Ádám Markovics
>Priority: Major
>
> I have a flow that is running a ConsumeMQTT processor, successfully connected 
> to the MQTT broker and consuming data. When the agent is restarted the 
> following error is shown on the minifi-app.log:
> {code:java}
> [2022-06-25 12:19:04.189] 
> [org::apache::nifi::minifi::processors::AbstractMQTTProcessor] [error] Failed 
> to subscribe to MQTT topic iot/# (-1)
> {code}
> The ConsumeMQTT processor never recovers from this error.
> After this, if I restart the agent, the ConsumeMQTT processor will manage to 
> reconnect to the broker successfully and start consuming.
> If I restart the agent one more time, the cycle above repeats.





[jira] [Assigned] (MINIFICPP-1871) ConsumeMQTT fails upon agent restart

2022-09-27 Thread Jira


 [ 
https://issues.apache.org/jira/browse/MINIFICPP-1871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ádám Markovics reassigned MINIFICPP-1871:
-

Assignee: Ádám Markovics  (was: Andre Araujo)

> ConsumeMQTT fails upon agent restart
> 
>
> Key: MINIFICPP-1871
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1871
> Project: Apache NiFi MiNiFi C++
>  Issue Type: Bug
>Reporter: Andre Araujo
>Assignee: Ádám Markovics
>Priority: Major
>
> I have a flow that is running a ConsumeMQTT processor, successfully connected 
> to the MQTT broker and consuming data. When the agent is restarted the 
> following error is shown on the minifi-app.log:
> {code:java}
> [2022-06-25 12:19:04.189] 
> [org::apache::nifi::minifi::processors::AbstractMQTTProcessor] [error] Failed 
> to subscribe to MQTT topic iot/# (-1)
> {code}
> The ConsumeMQTT processor never recovers from this error.
> After this, if I restart the agent, the ConsumeMQTT processor will manage to 
> reconnect to the broker successfully and start consuming.
> If I restart the agent one more time, the cycle above repeats.





[GitHub] [nifi] tpalfy commented on a diff in pull request #6444: NIFI-10513: Added capture non-record fields to JsonTreeRowRecordReader

2022-09-27 Thread GitBox


tpalfy commented on code in PR #6444:
URL: https://github.com/apache/nifi/pull/6444#discussion_r981142413


##
nifi-nar-bundles/nifi-salesforce-bundle/nifi-salesforce-processors/src/main/java/org/apache/nifi/processors/salesforce/QuerySalesforceObject.java:
##
@@ -330,76 +334,96 @@ public void onTrigger(final ProcessContext context, final 
ProcessSession session
 ageFilterUpper
 );
 
+AtomicReference nextRecordsUrl = new AtomicReference<>();
+
+do {
+
 FlowFile flowFile = session.create();
 
 Map originalAttributes = flowFile.getAttributes();
 Map attributes = new HashMap<>();
 
 AtomicInteger recordCountHolder = new AtomicInteger();
 
-flowFile = session.write(flowFile, out -> {
-try (
-InputStream querySObjectResultInputStream = 
salesforceRestService.query(querySObject);
-JsonTreeRowRecordReader jsonReader = new 
JsonTreeRowRecordReader(
-querySObjectResultInputStream,
-getLogger(),
-convertedSalesforceSchema.recordSchema,
-DATE_FORMAT,
-TIME_FORMAT,
-DATE_TIME_FORMAT,
-StartingFieldStrategy.NESTED_FIELD,
-STARTING_FIELD_NAME,
-SchemaApplicationStrategy.SELECTED_PART
-);
-
-RecordSetWriter writer = writerFactory.createWriter(
-getLogger(),
-writerFactory.getSchema(
-originalAttributes,
-convertedSalesforceSchema.recordSchema
-),
-out,
-originalAttributes
-)
-) {
-writer.beginRecordSet();
-
-Record querySObjectRecord;
-while ((querySObjectRecord = jsonReader.nextRecord()) != null) 
{
-writer.write(querySObjectRecord);
-}
-
-WriteResult writeResult = writer.finishRecordSet();
-
-attributes.put("record.count", 
String.valueOf(writeResult.getRecordCount()));
-attributes.put(CoreAttributes.MIME_TYPE.key(), 
writer.getMimeType());
-attributes.putAll(writeResult.getAttributes());
 
-recordCountHolder.set(writeResult.getRecordCount());
 
-if (ageFilterUpper != null) {
-Map newState = new 
HashMap<>(state.toMap());
-newState.put(LAST_AGE_FILTER, ageFilterUpper);
-updateState(context, newState);
+flowFile = session.write(flowFile, out -> {
+try (
+InputStream querySObjectResultInputStream = 
getResultInputStream(nextRecordsUrl, querySObject);
+
+JsonTreeRowRecordReader jsonReader = new 
JsonTreeRowRecordReader(
+querySObjectResultInputStream,
+getLogger(),
+convertedSalesforceSchema.recordSchema,
+DATE_FORMAT,
+TIME_FORMAT,
+DATE_TIME_FORMAT,
+StartingFieldStrategy.NESTED_FIELD,
+STARTING_FIELD_NAME,
+SchemaApplicationStrategy.SELECTED_PART,
+CAPTURE_PREDICATE
+);
+
+RecordSetWriter writer = writerFactory.createWriter(
+getLogger(),
+writerFactory.getSchema(
+originalAttributes,
+convertedSalesforceSchema.recordSchema
+),
+out,
+originalAttributes
+)
+) {
+writer.beginRecordSet();
+
+Record querySObjectRecord;
+while ((querySObjectRecord = jsonReader.nextRecord()) != 
null) {
+writer.write(querySObjectRecord);
+}
+
+WriteResult writeResult = writer.finishRecordSet();
+
+Map storedFields = 
jsonReader.getCapturedFields();
+
+nextRecordsUrl.set(storedFields.getOrDefault(CURSOR_URL, 
null));

Review Comment:
   ```suggestion
   Map capturedFields = 
jsonReader.getCapturedFields();
   
   
nextRecordsUrl.set(capturedFields.getOrDefault(CURSOR_URL, null));
   ```



##

[GitHub] [nifi] r65535 commented on a diff in pull request #6337: NIFI-7392: Add ValidateJson processor to standard bundle

2022-09-27 Thread GitBox


r65535 commented on code in PR #6337:
URL: https://github.com/apache/nifi/pull/6337#discussion_r981135645


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateJson.java:
##
@@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.networknt.schema.JsonSchema;
+import com.networknt.schema.JsonSchemaFactory;
+import com.networknt.schema.ValidationMessage;
+import com.networknt.schema.SpecVersion.VersionFlag;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"JSON", "schema", "validation"})
+@WritesAttributes({
+@WritesAttribute(attribute = "validatejson.invalid.error", description = 
"If the flow file is routed to the invalid relationship "
++ "the attribute will contain the error message resulting from the 
validation failure.")
+})
+@CapabilityDescription("Validates the contents of FlowFiles against a 
user-specified JSON Schema file")
+public class ValidateJson extends AbstractProcessor {
+
+public static final String ERROR_ATTRIBUTE_KEY = 
"validatejson.invalid.error";
+
+public static final AllowableValue SCHEMA_VERSION_4 = new 
AllowableValue("V4");
+public static final AllowableValue SCHEMA_VERSION_6 = new 
AllowableValue("V6");
+public static final AllowableValue SCHEMA_VERSION_7 = new 
AllowableValue("V7");
+public static final AllowableValue SCHEMA_VERSION_V201909 = new 
AllowableValue("V201909");
+
+public static final PropertyDescriptor SCHEMA_VERSION = new 
PropertyDescriptor
+.Builder().name("SCHEMA_VERSION")
+.displayName("Schema Version")
+.description("The JSON schema specification")
+.required(true)
+.allowableValues(SCHEMA_VERSION_4, SCHEMA_VERSION_6, SCHEMA_VERSION_7, 
SCHEMA_VERSION_V201909)
+.defaultValue(SCHEMA_VERSION_V201909.getValue())
+.build();
+
+public static final PropertyDescriptor SCHEMA_TEXT = new PropertyDescriptor
+.Builder().name("SCHEMA_TEXT")
+.displayName("Schema Text")
+.description("The text of a JSON schema")
+.required(true)
+.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final Relationship REL_VALID = new Relationship.Builder()
+.name("valid")
+.description("FlowFiles that are successfully validated against the 
schema are routed to this relationship")
+.build();
+
+public static final Relationship REL_INVALID = new 

[GitHub] [nifi] r65535 commented on a diff in pull request #6337: NIFI-7392: Add ValidateJson processor to standard bundle

2022-09-27 Thread GitBox


r65535 commented on code in PR #6337:
URL: https://github.com/apache/nifi/pull/6337#discussion_r981134430


##
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateJson.java:
##
@@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.expression.ExpressionLanguageScope;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.SideEffectFree;
+import org.apache.nifi.annotation.behavior.SupportsBatching;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.behavior.WritesAttributes;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.util.StandardValidators;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.networknt.schema.JsonSchema;
+import com.networknt.schema.JsonSchemaFactory;
+import com.networknt.schema.ValidationMessage;
+import com.networknt.schema.SpecVersion.VersionFlag;
+
+@EventDriven
+@SideEffectFree
+@SupportsBatching
+@InputRequirement(Requirement.INPUT_REQUIRED)
+@Tags({"JSON", "schema", "validation"})
+@WritesAttributes({
+@WritesAttribute(attribute = "validatejson.invalid.error", description = 
"If the flow file is routed to the invalid relationship "
++ "the attribute will contain the error message resulting from the 
validation failure.")
+})
+@CapabilityDescription("Validates the contents of FlowFiles against a 
user-specified JSON Schema file")
+public class ValidateJson extends AbstractProcessor {
+
+public static final String ERROR_ATTRIBUTE_KEY = 
"validatejson.invalid.error";
+
+public static final AllowableValue SCHEMA_VERSION_4 = new 
AllowableValue("V4");
+public static final AllowableValue SCHEMA_VERSION_6 = new 
AllowableValue("V6");
+public static final AllowableValue SCHEMA_VERSION_7 = new 
AllowableValue("V7");
+public static final AllowableValue SCHEMA_VERSION_V201909 = new 
AllowableValue("V201909");
+
+public static final PropertyDescriptor SCHEMA_VERSION = new 
PropertyDescriptor
+.Builder().name("SCHEMA_VERSION")
+.displayName("Schema Version")
+.description("The JSON schema specification")
+.required(true)
+.allowableValues(SCHEMA_VERSION_4, SCHEMA_VERSION_6, SCHEMA_VERSION_7, 
SCHEMA_VERSION_V201909)
+.defaultValue(SCHEMA_VERSION_V201909.getValue())
+.build();
+
+public static final PropertyDescriptor SCHEMA_TEXT = new PropertyDescriptor
+.Builder().name("SCHEMA_TEXT")
+.displayName("Schema Text")
+.description("The text of a JSON schema")
+.required(true)
+.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)

Review Comment:
   Didn't know this existed! I've added this in




[jira] [Closed] (NIFI-10497) Making RegistryClient an extension point

2022-09-27 Thread Simon Bence (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Bence closed NIFI-10497.
--

> Making RegistryClient an extension point
> 
>
> Key: NIFI-10497
> URL: https://issues.apache.org/jira/browse/NIFI-10497
> Project: Apache NiFi
>  Issue Type: New Feature
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Critical
> Fix For: 1.18.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Currently NiFi is capable of connecting only to the NiFi Registry as a "registry 
> repository". It would be beneficial to give NiFi the capability to depend on 
> other services.
> In order to do this, I suggest decoupling the registry behaviour from the NiFi 
> Registry as much as possible (including but not limited to the API and the 
> Resources) and moving the actual implementation behind this new API.
> To be able to add new implementations, this new API must be an extension 
> point applying the usual NiFi instruments. Also, it is paramount to 
> keep continuity with the current usages and make the implementation 
> capable of processing the current REST call format.





[jira] [Closed] (NIFI-10550) Fixing SSL context service validation for NifiRegistryFlowRegistryClient

2022-09-27 Thread Simon Bence (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Bence closed NIFI-10550.
--

> Fixing SSL context service validation for NifiRegistryFlowRegistryClient
> 
>
> Key: NIFI-10550
> URL: https://issues.apache.org/jira/browse/NIFI-10550
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.18.0
>Reporter: Simon Bence
>Assignee: Simon Bence
>Priority: Major
> Fix For: 1.18.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The validation of the SSL Context Service within 
> NifiRegistryFlowRegistryClient should trigger when the Controller Service is 
> set but only the Trust or Key store is filled, not both. The current logic 
> triggers a validation error when both are set instead; this should be fixed.





[GitHub] [nifi] turcsanyip commented on a diff in pull request #6452: NIFI-10551: Improve GetHubSpot documentation

2022-09-27 Thread GitBox


turcsanyip commented on code in PR #6452:
URL: https://github.com/apache/nifi/pull/6452#discussion_r981024048


##
nifi-nar-bundles/nifi-hubspot-bundle/nifi-hubspot-processors/src/main/resources/docs/org.apache.nifi.processors.hubspot.GetHubSpot/additionalDetails.html:
##
@@ -32,5 +32,13 @@ Incremental Loading
 last run time of the processor are processed. The processor state can be 
reset in the context menu. The incremental loading
 is based on the objects last modified time.
 
+
+There is no deletion tolerance in the current implementation. Some objects 
may be omitted if any object is deleted between fetching two pages.

Review Comment:
   I think it rather belongs to the `Paging` section so I would move it there.
   Also, I would mention that the deletion tolerance issue is due to the 
limitations of the HubSpot API. Something like this:
   ```suggestion
   
   Due to the page handling mechanism of the HubSpot API, parallel 
deletions are not supported. Some objects may be omitted if any object is 
deleted between fetching two pages.
   ```



##
nifi-nar-bundles/nifi-hubspot-bundle/nifi-hubspot-processors/src/main/java/org/apache/nifi/processors/hubspot/GetHubSpot.java:
##
@@ -116,17 +116,18 @@ public class GetHubSpot extends AbstractProcessor {
 " the previous run time and the current time (optionally 
adjusted by the Incremental Delay property).")
 .required(true)
 .allowableValues("true", "false")
-.defaultValue("false")
+.defaultValue("true")
 .build();
 
 static final PropertyDescriptor INCREMENTAL_DELAY = new 
PropertyDescriptor.Builder()
 .name("incremental-delay")
 .displayName("Incremental Delay")
 .description(("The ending timestamp of the time window will be 
adjusted earlier by the amount configured in this property." +
 " For example, with a property value of 10 seconds, an 
ending timestamp of 12:30:45 would be changed to 12:30:35." +
-" Set this property to avoid missing objects when the 
clock of your local machines and HubSpot servers' clock are not in sync."))
+" Set this property to avoid missing objects when the 
clock of your local machines and HubSpot servers' clock are not in sync" +
+" and to protect against HubSpot's mechanism that changes 
last updated dates after object creation."))

Review Comment:
   ```suggestion
   " and to protect against HubSpot's mechanism that 
changes last updated timestamps after object creation."))
   ```






[jira] [Updated] (NIFI-10551) Improve GetHubSpot documentation

2022-09-27 Thread Jira


 [ 
https://issues.apache.org/jira/browse/NIFI-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lehel Boér updated NIFI-10551:
--
Status: Patch Available  (was: In Progress)

> Improve GetHubSpot documentation
> 
>
> Key: NIFI-10551
> URL: https://issues.apache.org/jira/browse/NIFI-10551
> Project: Apache NiFi
>  Issue Type: Improvement
> Environment: Improve GetHubSpot documentation with incremental 
> loading capabilities.
>Reporter: Lehel Boér
>Assignee: Lehel Boér
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[GitHub] [nifi] Lehel44 opened a new pull request, #6452: NIFI-10551: Improve GetHubSpot documentation

2022-09-27 Thread GitBox


Lehel44 opened a new pull request, #6452:
URL: https://github.com/apache/nifi/pull/6452

   
   
   
   
   
   
   
   
   
   
   
   
   
   # Summary
   
   [NIFI-10551](https://issues.apache.org/jira/browse/NIFI-10551)
   
   # Tracking
   
   Please complete the following tracking steps prior to pull request creation.
   
   ### Issue Tracking
   
   - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI-10551) 
issue created
   
   ### Pull Request Tracking
   
   - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as 
`NIFI-0`
   - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, 
as such `NIFI-0`
   
   ### Pull Request Formatting
   
   - [ ] Pull Request based on current revision of the `main` branch
   - [ ] Pull Request refers to a feature branch with one commit containing 
changes
   
   # Verification
   
   Please indicate the verification steps performed prior to pull request 
creation.
   
   ### Build
   
   - [ ] Build completed using `mvn clean install -P contrib-check`
 - [ ] JDK 8
 - [ ] JDK 11
 - [ ] JDK 17
   
   ### Licensing
   
   - [ ] New dependencies are compatible with the [Apache License 
2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License 
Policy](https://www.apache.org/legal/resolved.html)
   - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` 
files
   
   ### Documentation
   
   - [ ] Documentation formatting appears as expected in rendered files
   





[jira] [Created] (NIFI-10551) Improve GetHubSpot documentation

2022-09-27 Thread Jira
Lehel Boér created NIFI-10551:
-

 Summary: Improve GetHubSpot documentation
 Key: NIFI-10551
 URL: https://issues.apache.org/jira/browse/NIFI-10551
 Project: Apache NiFi
  Issue Type: Improvement
 Environment: Improve GetHubSpot documentation with incremental loading 
capabilities.
Reporter: Lehel Boér
Assignee: Lehel Boér








[jira] [Resolved] (NIFI-10460) Zendesk Support source connector

2022-09-27 Thread Ferenc Kis (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenc Kis resolved NIFI-10460.
---
Resolution: Fixed

> Zendesk Support source connector
> 
>
> Key: NIFI-10460
> URL: https://issues.apache.org/jira/browse/NIFI-10460
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Ferenc Kis
>Assignee: Ferenc Kis
>Priority: Major
>  Labels: Zendesk
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Connector to incrementally fetch data from Zendesk Support.





[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1412: MINIFICPP-1923 Refactor PutUDP to use asio

2022-09-27 Thread GitBox


martinzink commented on code in PR #1412:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1412#discussion_r980883708


##
extensions/standard-processors/processors/PutUDP.cpp:
##
@@ -107,51 +98,53 @@ void PutUDP::onTrigger(core::ProcessContext* context, core::ProcessSession* const session) {
     return;
   }
 
-  const auto nonthrowing_sockaddr_ntop = [](const sockaddr* const sa) -> std::string {
-    return utils::try_expression([sa] { return utils::net::sockaddr_ntop(sa); }).value_or("(n/a)");
+  asio::io_context io_context;
+
+  const auto resolve_hostname = [&io_context, &hostname, &port]() -> nonstd::expected<udp::resolver::results_type, std::error_code> {
+    udp::resolver resolver(io_context);
+    std::error_code error_code;
+    auto resolved_query = resolver.resolve(hostname, port, error_code);
+    if (error_code)
+      return nonstd::make_unexpected(error_code);
+    return resolved_query;
   };
 
-  const auto debug_log_resolved_names = [&, this](const addrinfo& names) -> decltype(auto) {
-    if (logger_->should_log(core::logging::LOG_LEVEL::debug)) {
-      std::vector<std::string> names_vector;
-      for (const addrinfo* it = &names; it; it = it->ai_next) {
-        names_vector.push_back(nonthrowing_sockaddr_ntop(it->ai_addr));
+  const auto send_data_to_endpoint = [&io_context, &data, logger = this->logger_](const udp::resolver::results_type& resolved_query) -> nonstd::expected<void, std::error_code> {
+    std::error_code error;
+    for (const auto& resolver_entry : resolved_query) {
+      error.clear();
+      udp::socket socket(io_context);
+      socket.open(resolver_entry.endpoint().protocol(), error);
+      if (error) {
+        logger->log_debug("opening %s socket failed due to %s ", resolver_entry.endpoint().protocol() == udp::v4() ? "IPv4" : "IPv6", error.message());
+        continue;
       }
-      logger_->log_debug("resolved \'%s\' to: %s",
-          hostname,
-          names_vector | ranges::views::join(',') | ranges::to<std::string>());
+      socket.send_to(asio::buffer(data.buffer), resolver_entry.endpoint(), udp::socket::message_flags{}, error);
+      if (error) {
+        core::logging::LOG_DEBUG(logger) << "sending to endpoint " << resolver_entry.endpoint() << " failed due to " << error.message();

Review Comment:
   As long as we succeed in sending to one resolved endpoint, I don't think we 
need to treat these as warnings or errors, since this can happen during normal 
operation (and the last error will still be logged as an error).
   
   I've added the success logging in 
https://github.com/apache/nifi-minifi-cpp/pull/1412/commits/3caa5a47fc16dc44dddc298c7f3d20b33fefe6f7
 and 
https://github.com/apache/nifi-minifi-cpp/pull/1412/commits/64dfce35ea2b477ecd8695bd868b184da9696512






[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1412: MINIFICPP-1923 Refactor PutUDP to use asio

2022-09-27 Thread GitBox


martinzink commented on code in PR #1412:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1412#discussion_r980883175


##
libminifi/test/Utils.h:
##
@@ -111,24 +111,42 @@ bool countLogOccurrencesUntil(const std::string& pattern,
   return false;
 }
 
-bool sendMessagesViaTCP(const std::vector<std::string_view>& contents, uint64_t port) {
+bool sendMessagesViaTCP(const std::vector<std::string_view>& contents, const asio::ip::tcp::endpoint& remote_endpoint) {
   asio::io_context io_context;
   asio::ip::tcp::socket socket(io_context);
-  asio::ip::tcp::endpoint remote_endpoint(asio::ip::address::from_string("127.0.0.1"), port);
   socket.connect(remote_endpoint);
   std::error_code err;
   for (auto& content : contents) {
     std::string tcp_message(content);
     tcp_message += '\n';
     asio::write(socket, asio::buffer(tcp_message, tcp_message.size()), err);
   }
-  if (err) {
+  if (err)
+    return false;
+  socket.close();
+  return true;

Review Comment:
   Just checked, and fortunately RAII will take care of everything, so we don't 
need to manually close it anywhere.
   I moved the error checking inside the loop in 
https://github.com/apache/nifi-minifi-cpp/pull/1412/commits/3caa5a47fc16dc44dddc298c7f3d20b33fefe6f7



##
libminifi/test/Utils.h:
##
@@ -111,24 +111,42 @@ bool countLogOccurrencesUntil(const std::string& pattern,
   return false;
 }
 
-bool sendMessagesViaTCP(const std::vector<std::string_view>& contents, uint64_t port) {
+bool sendMessagesViaTCP(const std::vector<std::string_view>& contents, const asio::ip::tcp::endpoint& remote_endpoint) {
   asio::io_context io_context;
   asio::ip::tcp::socket socket(io_context);
-  asio::ip::tcp::endpoint remote_endpoint(asio::ip::address::from_string("127.0.0.1"), port);
   socket.connect(remote_endpoint);
   std::error_code err;
   for (auto& content : contents) {
     std::string tcp_message(content);
     tcp_message += '\n';
     asio::write(socket, asio::buffer(tcp_message, tcp_message.size()), err);

Review Comment:
   You are right :+1: I moved the error checking inside the loop in 
https://github.com/apache/nifi-minifi-cpp/pull/1412/commits/3caa5a47fc16dc44dddc298c7f3d20b33fefe6f7






[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1412: MINIFICPP-1923 Refactor PutUDP to use asio

2022-09-27 Thread GitBox


martinzink commented on code in PR #1412:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1412#discussion_r980867567


##
extensions/standard-processors/tests/unit/ListenSyslogTests.cpp:
##
@@ -197,21 +197,10 @@ constexpr std::string_view rfc5424_logger_example_1 = 
R"(<13>1 2022-03-17T10:10:
 
 constexpr std::string_view invalid_syslog = "not syslog";
 
-void sendUDPPacket(const std::string_view content, uint64_t port) {
-  asio::io_context io_context;
-  asio::ip::udp::socket socket(io_context);
-  asio::ip::udp::endpoint remote_endpoint(asio::ip::address::from_string("127.0.0.1"), port);
-  socket.open(asio::ip::udp::v4());
-  std::error_code err;
-  socket.send_to(asio::buffer(content, content.size()), remote_endpoint, 0, err);
-  REQUIRE(!err);
-  socket.close();
-}
-
 void check_for_only_basic_attributes(core::FlowFile& flow_file, uint16_t port, 
std::string_view protocol) {
   CHECK(std::to_string(port) == flow_file.getAttribute("syslog.port"));
   CHECK(protocol == flow_file.getAttribute("syslog.protocol"));
-  CHECK("127.0.0.1" == flow_file.getAttribute("syslog.sender"));
+  CHECK(("::ffff:127.0.0.1" == flow_file.getAttribute("syslog.sender") || "::1" == flow_file.getAttribute("syslog.sender")));

Review Comment:
   The `::ffff:` prefix marks IPv4 addresses that are mapped into IPv6.
   The UDP/TCP servers now listen on IPv6 with dual-stacking enabled. This 
means the OS listens on IPv4 and IPv6 simultaneously, and any request that 
arrives over IPv4 is represented inside IPv6's IPv4-mapped subnet, so 
`127.0.0.1` (the original sender) becomes `::ffff:127.0.0.1`.


