[GitHub] [nifi] timeabarna commented on pull request #7353: NIFI-11658 Streamline using single parameter context for nested PGs
timeabarna commented on PR #7353: URL: https://github.com/apache/nifi/pull/7353#issuecomment-1614151688 Thanks @exceptionfactory and @mcgilman for your review, I've updated the PR. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1591: MINIFICPP-2130 Custom cache eviction strategy for GitHub Actions
szaszm commented on code in PR #1591: URL: https://github.com/apache/nifi-minifi-cpp/pull/1591#discussion_r1236873733

## .github/workflows/ci.yml:

@@ -142,6 +154,12 @@ jobs:
           -DENABLE_AZURE=OFF -DENABLE_SPLUNK=OFF -DENABLE_GCP=OFF -DENABLE_PROCFS=OFF -DENABLE_BUSTACHE=ON -DENABLE_PCAP=ON -DENABLE_JNI=ON -DENABLE_SFTP=ON \
           -DENABLE_LUA_SCRIPTING=OFF -DENABLE_PYTHON_SCRIPTING=OFF -DENABLE_MQTT=OFF -DENABLE_ELASTICSEARCH=OFF -DENABLE_KUBERNETES=OFF -DENABLE_OPC=OFF ..
           make -j$(nproc) VERBOSE=1
+      - name: cache save
+        uses: actions/cache/save@v3
+        if: always()
+        with:
+          path: ~/.ccache
+          key: ubuntu-20.04-ccache-${{github.ref}}-${{github.sha}}

Review Comment: How did this work before adding the explicit path declaration? Why is the change necessary?

## github_scripts/github_actions_cache_cleanup_tests.py:

@@ -0,0 +1,80 @@
+#!/bin/python3
+
+import unittest
+from unittest.mock import MagicMock
+from github_actions_cache_cleanup import GithubActionsCacheCleaner
+
+
+class TestGithubActionsCacheCleaner(unittest.TestCase):
+    def create_mock_github_request_sender(self):
+        mock = MagicMock()
+        mock.list_open_pull_requests = MagicMock()
+        open_pull_requests = [
+            {
+                "number": "227",
+                "title": "MINIFICPP-13712 TEST1",
+            },
+            {
+                "number": "228",
+                "title": "MINIFICPP- TEST2",
+            },
+            {
+                "number": "229",
+                "title": "MINIFICPP-123 TEST3",
+            }
+        ]
+        mock.list_open_pull_requests.return_value = open_pull_requests
+        caches = {
+            "actions_caches": [
+                {
+                    "id": "999",
+                    "key": "macos-xcode-ccache-refs/pull/226/merge-6c8d283f5bc894af8dfc295e5976a5f154753123",
+                },
+                {
+                    "id": "1",
+                    "key": "ubuntu-20.04-ccache-refs/pull/227/merge-9d6d283f5bc894af8dfc295e5976a5f1b46649c4",
+                },
+                {
+                    "id": "2",
+                    "key": "ubuntu-20.04-ccache-refs/pull/227/merge-1d6d283f5bc894af8dfc295e5976a5f154753487",
+                },
+                {
+                    "id": "12345",
+                    "key": "macos-xcode-ccache-refs/pull/227/merge-2d6d283f5bc894af8dfc295e5976a5f154753536",
+                },
+                {
+                    "id": "1",
+                    "key": "macos-xcode-ccache-refs/heads/MINIFICPP--9d5e183f5bc894af8dfc295e5976a5f1b4664456",
+                },
+                {
+                    "id": "2",
+                    "key": "macos-xcode-ccache-refs/heads/MINIFICPP--8f4d283f5bc894af8dfc295e5976a5f1b4664123",
+                },
+                {
+                    "id": "4",
+                    "key": "ubuntu-20.04-all-clang-ccache-refs/heads/main-1d4d283f5bc894af8dfc295e5976a5f1b4664456",
+                },
+                {
+                    "id": "5",
+                    "key": "ubuntu-20.04-all-clang-ccache-refs/heads/main-2f4d283f5bc894af8dfc295e5976a5f1b4664567",
+                }

Review Comment: Out of these two, how do we determine which one is newer? We should keep the newest cache.
[jira] [Updated] (NIFI-11714) Add General Error Handling for Jetty Framework Server
[ https://issues.apache.org/jira/browse/NIFI-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Gresock updated NIFI-11714: --- Fix Version/s: 2.0.0 1.23.0 (was: 1.latest) (was: 2.latest) Resolution: Fixed Status: Resolved (was: Patch Available) > Add General Error Handling for Jetty Framework Server > - > > Key: NIFI-11714 > URL: https://issues.apache.org/jira/browse/NIFI-11714 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 2.0.0, 1.23.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The Jetty server that supports framework operations should be updated to > include generalized error handling that avoids writing stack traces. > Application REST resources support error handling and simplified messages, > but Jetty handles some exceptions that can result from malformed HTTP > requests. Implementing fallback error handling for Jetty will avoid providing > stack trace information and other information to HTTP clients. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-11714) Add General Error Handling for Jetty Framework Server
[ https://issues.apache.org/jira/browse/NIFI-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738774#comment-17738774 ] ASF subversion and git services commented on NIFI-11714: Commit 5a68069b8fff1f4bca80fe77bda5b5562e9e5721 in nifi's branch refs/heads/support/nifi-1.x from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=5a68069b8f ] NIFI-11714 Added Error Handler to Jetty Server - Configured Error Handler with Stack Traces disabled for NiFi and Registry Signed-off-by: Joe Gresock This closes #7447. > Add General Error Handling for Jetty Framework Server > - > > Key: NIFI-11714 > URL: https://issues.apache.org/jira/browse/NIFI-11714 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 1.latest, 2.latest > > Time Spent: 20m > Remaining Estimate: 0h > > The Jetty server that supports framework operations should be updated to > include generalized error handling that avoids writing stack traces. > Application REST resources support error handling and simplified messages, > but Jetty handles some exceptions that can result from malformed HTTP > requests. Implementing fallback error handling for Jetty will avoid providing > stack trace information and other information to HTTP clients. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-11714) Add General Error Handling for Jetty Framework Server
[ https://issues.apache.org/jira/browse/NIFI-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738773#comment-17738773 ] ASF subversion and git services commented on NIFI-11714: Commit 50b01ffd6385516a6b26e2d8937e0c1820c49e2c in nifi's branch refs/heads/main from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=50b01ffd63 ] NIFI-11714 Added Error Handler to Jetty Server - Configured Error Handler with Stack Traces disabled for NiFi and Registry Signed-off-by: Joe Gresock This closes #7447. > Add General Error Handling for Jetty Framework Server > - > > Key: NIFI-11714 > URL: https://issues.apache.org/jira/browse/NIFI-11714 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 1.latest, 2.latest > > Time Spent: 10m > Remaining Estimate: 0h > > The Jetty server that supports framework operations should be updated to > include generalized error handling that avoids writing stack traces. > Application REST resources support error handling and simplified messages, > but Jetty handles some exceptions that can result from malformed HTTP > requests. Implementing fallback error handling for Jetty will avoid providing > stack trace information and other information to HTTP clients. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] asfgit closed pull request #7447: NIFI-11714 Add Error Handler to Jetty Server
asfgit closed pull request #7447: NIFI-11714 Add Error Handler to Jetty Server URL: https://github.com/apache/nifi/pull/7447
[jira] [Assigned] (NIFI-11519) Sensitive Dynamic Properties do not work with Sensitive Parameter Values in DBCP
[ https://issues.apache.org/jira/browse/NIFI-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Emilio Setiadarma reassigned NIFI-11519: Assignee: Emilio Setiadarma > Sensitive Dynamic Properties do not work with Sensitive Parameter Values in > DBCP > > > Key: NIFI-11519 > URL: https://issues.apache.org/jira/browse/NIFI-11519 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.21.0 >Reporter: Andrew M. Lim >Assignee: Emilio Setiadarma >Priority: Major > > When configuring a DBCPConnectionPool controller service, I created a dynamic > property {{PWD}} and selected "Sensitive". The value from this dynamic > property should be used in the PWD connection property in the Database > Connection URL, but it causes an error in the processor that uses the > controller service (see below). > WORKAROUND: If I create a dynamic property {{SENSITIVE.PWD}} (doesn't matter > if I select "Sensitive" setting), then the error does not occur. > Error (from ExecuteSQL processor using the DBCP Connection Pool): > 16:04:12 UTCERRORef0a84d5-26f6-377a-af2c-bd9ddb098ea0 > All NodesExecuteSQL[id=ef0a84d5-26f6-377a-af2c-bd9ddb098ea0] Unable to > execute SQL select query [CREATE EXTERNAL TABLE orders25 LIKE PARQUET > 's3a://X/destination/parquet/X.parquet' > STORED AS PARQUET > LOCATION 's3a://X/destination/parquet/';] for > FlowFile[filename=X.parquet] routing to failure: > org.apache.nifi.processor.exception.ProcessException: Privileged action > failed due to: Cannot create PoolableConnectionFactory ( [JDBC](10100) > Connection Refused: [JDBC](11640) Required Connection Key(s): PWD; > [JDBC](11480) Optional Connection Key(s): AllowSelfSignedCerts, > AsyncExecPollInterval, AutomaticColumnRename, CAIssuedCertNamesMismatch, > CatalogSchemaSwitch, DefaultStringColumnLength, DelegationToken, > DelegationUID, DnsResolver, DnsResolverArg, FastConnection, krbJAASFile, > LowerCaseResultSetColumnName, NonSSPs, OptimizedInsert, > 
PreparedMetaLimitZero, RowsFetchedPerBlock, ServerVersion, > ServiceDiscoveryMode, SocketFactory, SocketFactoryArg, SocketTimeOut, > SSLKeyStore, SSLKeyStorePwd, SSLTrustStore, SSLTrustStorePwd, > StripCatalogName, SupportTimeOnlyTimestamp, UseCustomTypeCoercionMap, > UseNativeQuery, UseSasl) > - Caused by: java.sql.SQLException: Cannot create PoolableConnectionFactory ( > [JDBC](10100) Connection Refused: [JDBC](11640) Required Connection Key(s): > PWD; [JDBC](11480) Optional Connection Key(s): AllowSelfSignedCerts, > AsyncExecPollInterval, AutomaticColumnRename, CAIssuedCertNamesMismatch, > CatalogSchemaSwitch, DefaultStringColumnLength, DelegationToken, > DelegationUID, DnsResolver, DnsResolverArg, FastConnection, krbJAASFile, > LowerCaseResultSetColumnName, NonSSPs, OptimizedInsert, > PreparedMetaLimitZero, RowsFetchedPerBlock, ServerVersion, > ServiceDiscoveryMode, SocketFactory, SocketFactoryArg, SocketTimeOut, > SSLKeyStore, SSLKeyStorePwd, SSLTrustStore, SSLTrustStorePwd, > StripCatalogName, SupportTimeOnlyTimestamp, UseCustomTypeCoercionMap, > UseNativeQuery, UseSasl) > - Caused by: java.sql.SQLNonTransientConnectionException: [JDBC](10100) > Connection Refused: [JDBC](11640) Required Connection Key(s): PWD; > [JDBC](11480) Optional Connection Key(s): AllowSelfSignedCerts, > AsyncExecPollInterval, AutomaticColumnRename, CAIssuedCertNamesMismatch, > CatalogSchemaSwitch, DefaultStringColumnLength, DelegationToken, > DelegationUID, DnsResolver, DnsResolverArg, FastConnection, krbJAASFile, > LowerCaseResultSetColumnName, NonSSPs, OptimizedInsert, > PreparedMetaLimitZero, RowsFetchedPerBlock, ServerVersion, > ServiceDiscoveryMode, SocketFactory, SocketFactoryArg, SocketTimeOut, > SSLKeyStore, SSLKeyStorePwd, SSLTrustStore, SSLTrustStorePwd, > StripCatalogName, SupportTimeOnlyTimestamp, UseCustomTypeCoercionMap, > UseNativeQuery, UseSasl -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-11714) Add General Error Handling for Jetty Framework Server
[ https://issues.apache.org/jira/browse/NIFI-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-11714: Fix Version/s: 1.latest 2.latest > Add General Error Handling for Jetty Framework Server > - > > Key: NIFI-11714 > URL: https://issues.apache.org/jira/browse/NIFI-11714 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 1.latest, 2.latest > > Time Spent: 10m > Remaining Estimate: 0h > > The Jetty server that supports framework operations should be updated to > include generalized error handling that avoids writing stack traces. > Application REST resources support error handling and simplified messages, > but Jetty handles some exceptions that can result from malformed HTTP > requests. Implementing fallback error handling for Jetty will avoid providing > stack trace information and other information to HTTP clients. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-11714) Add General Error Handling for Jetty Framework Server
[ https://issues.apache.org/jira/browse/NIFI-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-11714: Affects Version/s: (was: 1.latest) (was: 2.latest) > Add General Error Handling for Jetty Framework Server > - > > Key: NIFI-11714 > URL: https://issues.apache.org/jira/browse/NIFI-11714 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > The Jetty server that supports framework operations should be updated to > include generalized error handling that avoids writing stack traces. > Application REST resources support error handling and simplified messages, > but Jetty handles some exceptions that can result from malformed HTTP > requests. Implementing fallback error handling for Jetty will avoid providing > stack trace information and other information to HTTP clients. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] krisztina-zsihovszki commented on a diff in pull request #7449: NIFI-11334: PutIceberg processor instance interference due same class loader usage
krisztina-zsihovszki commented on code in PR #7449: URL: https://github.com/apache/nifi/pull/7449#discussion_r1246974701

## nifi-nar-bundles/nifi-iceberg-bundle/nifi-iceberg-processors/src/main/java/org/apache/nifi/processors/iceberg/catalog/IcebergCatalogFactory.java:

@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.iceberg.catalog;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.iceberg.CatalogProperties;
+import org.apache.iceberg.catalog.Catalog;
+import org.apache.iceberg.hadoop.HadoopCatalog;
+import org.apache.iceberg.hive.HiveCatalog;
+import org.apache.nifi.services.iceberg.IcebergCatalogProperties;
+import org.apache.nifi.services.iceberg.IcebergCatalogService;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.nifi.processors.iceberg.IcebergUtils.getConfigurationFromFiles;
+
+public class IcebergCatalogFactory {
+
+    private final IcebergCatalogService catalogService;
+
+    public IcebergCatalogFactory(IcebergCatalogService catalogService) {
+        this.catalogService = catalogService;
+    }
+
+    public Catalog create() {
+        return switch (catalogService.getCatalogServiceType()) {

Review Comment: The arrow operator is not supported in Java 8. Since this processor is also targeted for 1.x, please use Java 8 compatible syntax.

## nifi-nar-bundles/nifi-iceberg-bundle/nifi-iceberg-services/src/main/java/org/apache/nifi/services/iceberg/HiveCatalogService.java:

@@ -69,28 +65,43 @@ protected List getSupportedPropertyDescriptors() {
         return PROPERTIES;
     }

-    private HiveCatalog catalog;
-
     @Override
     protected Collection customValidate(ValidationContext validationContext) {
         final List problems = new ArrayList<>();
-        String configMetastoreUri = null;
-        String configWarehouseLocation = null;
+        boolean configMetastoreUriPresent = false;
+        boolean configWarehouseLocationPresent = false;

Review Comment: Not related to the actual change, but I just noticed that the expression language support configuration of these properties can be enhanced. In my view both METASTORE_URI and WAREHOUSE_LOCATION can use .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY), since flow files are not used when they are evaluated. (The same comment applies for WAREHOUSE_PATH in HadoopCatalogService.)

## nifi-nar-bundles/nifi-iceberg-bundle/nifi-iceberg-services/src/main/java/org/apache/nifi/services/iceberg/AbstractCatalogService.java:

@@ -44,24 +55,30 @@ public abstract class AbstractCatalogService extends AbstractControllerService i
             .dynamicallyModifiesClasspath(true)
             .build();

-    /**
-     * Loads configuration files from the provided paths.
-     *
-     * @param configFiles list of config file paths separated with comma
-     * @return merged configuration
-     */
-    protected Configuration getConfigurationFromFiles(String configFiles) {
-        final Configuration conf = new Configuration();
-        if (StringUtils.isNotBlank(configFiles)) {
+    protected List parseConfigFile(String configFiles) {
+        List documentList = new ArrayList<>();
+        if (configFiles != null && !configFiles.trim().isEmpty()) {
             for (final String configFile : configFiles.split(",")) {
-                conf.addResource(new Path(configFile.trim()));
+                File file = new File(configFile.trim());
+                try (final InputStream fis = new FileInputStream(file);
+                     final InputStream in = new BufferedInputStream(fis)) {
+                    final StandardDocumentProvider documentProvider = new StandardDocumentProvider();
+                    documentList.add(documentProvider.parse(in));
+                } catch (IOException e) {
+                    throw new RuntimeException(e);

Review Comment: I'd rather use ProcessException and add an error message as well.
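The Java 8 compatibility issue raised above comes down to replacing the Java 14+ arrow-form switch expression with a classic switch statement that returns from each branch. A hedged sketch of the pattern; the enum and the returned names are placeholders for illustration, not the actual Iceberg catalog classes:

```java
public class CatalogSwitchSketch {
    // Placeholder enum standing in for the PR's catalog service type
    public enum CatalogServiceType { HIVE, HADOOP }

    // Java 8 compatible: a switch statement with explicit returns instead of
    // the arrow-form switch expression `return switch (type) { case HIVE -> ... }`
    public static String catalogFor(CatalogServiceType type) {
        switch (type) {
            case HIVE:
                return "HiveCatalog";
            case HADOOP:
                return "HadoopCatalog";
            default:
                throw new IllegalArgumentException("Unknown catalog type: " + type);
        }
    }
}
```

The default branch also preserves the exhaustiveness a switch expression would enforce at compile time.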
[jira] [Commented] (NIFI-11765) Upgrade to apache parent version 30
[ https://issues.apache.org/jira/browse/NIFI-11765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738736#comment-17738736 ] ASF subversion and git services commented on NIFI-11765: Commit 7c329bd2a86ba7744e3d6ced19d4c6c39d1e2295 in nifi's branch refs/heads/main from Bryan Bende [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=7c329bd2a8 ] NIFI-11765 Upgrade to apache parent version 30 Signed-off-by: Matt Burgess This closes #7450 > Upgrade to apache parent version 30 > --- > > Key: NIFI-11765 > URL: https://issues.apache.org/jira/browse/NIFI-11765 > Project: Apache NiFi > Issue Type: Task >Affects Versions: 1.22.0 >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > Fix For: 1.latest, 2.latest > > Time Spent: 0.5h > Remaining Estimate: 0h > > There appears to be some weird compatibility issue with apache parent 29 and > Maven 3.8x and 3.9.x. The scenario that produces the problem is running a > build with a system property to override a version, say > "-Dhadoop.version=..." and then some module that does not even reference > hadoop.version, but does have hadoop dependencies, line ranger stuff which > uses its own hadoop.version, ends up trying to resolve the version from > hadoop.version. It happens specifically during the process-remote-resources > phase: > {code:java} > [INFO] --- remote-resources:1.7.0:process (process-resource-bundles) @ > nifi-ranger-plugin --- > [INFO] Preparing remote bundle org.apache:apache-jar-resource-bundle:1.4 > {code} > There seems to be some significant changes to this apache-jar-resource-bundle > between 1.4 and 1.5, and apache parent 30 goes to 1.5. > https://repo1.maven.org/maven2/org/apache/apache/29/apache-29.pom > https://repo1.maven.org/maven2/org/apache/apache/30/apache-30.pom -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-11765) Upgrade to apache parent version 30
[ https://issues.apache.org/jira/browse/NIFI-11765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-11765: Fix Version/s: 2.0.0 1.23.0 (was: 1.latest) (was: 2.latest) Resolution: Fixed Status: Resolved (was: Patch Available) > Upgrade to apache parent version 30 > --- > > Key: NIFI-11765 > URL: https://issues.apache.org/jira/browse/NIFI-11765 > Project: Apache NiFi > Issue Type: Task >Affects Versions: 1.22.0 >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > Fix For: 2.0.0, 1.23.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > There appears to be some weird compatibility issue with apache parent 29 and > Maven 3.8x and 3.9.x. The scenario that produces the problem is running a > build with a system property to override a version, say > "-Dhadoop.version=..." and then some module that does not even reference > hadoop.version, but does have hadoop dependencies, line ranger stuff which > uses its own hadoop.version, ends up trying to resolve the version from > hadoop.version. It happens specifically during the process-remote-resources > phase: > {code:java} > [INFO] --- remote-resources:1.7.0:process (process-resource-bundles) @ > nifi-ranger-plugin --- > [INFO] Preparing remote bundle org.apache:apache-jar-resource-bundle:1.4 > {code} > There seems to be some significant changes to this apache-jar-resource-bundle > between 1.4 and 1.5, and apache parent 30 goes to 1.5. > https://repo1.maven.org/maven2/org/apache/apache/29/apache-29.pom > https://repo1.maven.org/maven2/org/apache/apache/30/apache-30.pom -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] mattyb149 closed pull request #7450: NIFI-11765 Upgrade to apache parent version 30
mattyb149 closed pull request #7450: NIFI-11765 Upgrade to apache parent version 30 URL: https://github.com/apache/nifi/pull/7450
[jira] [Commented] (NIFI-11765) Upgrade to apache parent version 30
[ https://issues.apache.org/jira/browse/NIFI-11765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738735#comment-17738735 ] ASF subversion and git services commented on NIFI-11765: Commit 09be97e310c09ef48fe0b77faba8da4ceec27f45 in nifi's branch refs/heads/support/nifi-1.x from Bryan Bende [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=09be97e310 ] NIFI-11765 Upgrade to apache parent version 30 Signed-off-by: Matt Burgess > Upgrade to apache parent version 30 > --- > > Key: NIFI-11765 > URL: https://issues.apache.org/jira/browse/NIFI-11765 > Project: Apache NiFi > Issue Type: Task >Affects Versions: 1.22.0 >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > Fix For: 1.latest, 2.latest > > Time Spent: 10m > Remaining Estimate: 0h > > There appears to be some weird compatibility issue with apache parent 29 and > Maven 3.8x and 3.9.x. The scenario that produces the problem is running a > build with a system property to override a version, say > "-Dhadoop.version=..." and then some module that does not even reference > hadoop.version, but does have hadoop dependencies, line ranger stuff which > uses its own hadoop.version, ends up trying to resolve the version from > hadoop.version. It happens specifically during the process-remote-resources > phase: > {code:java} > [INFO] --- remote-resources:1.7.0:process (process-resource-bundles) @ > nifi-ranger-plugin --- > [INFO] Preparing remote bundle org.apache:apache-jar-resource-bundle:1.4 > {code} > There seems to be some significant changes to this apache-jar-resource-bundle > between 1.4 and 1.5, and apache parent 30 goes to 1.5. > https://repo1.maven.org/maven2/org/apache/apache/29/apache-29.pom > https://repo1.maven.org/maven2/org/apache/apache/30/apache-30.pom -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] mattyb149 commented on pull request #7450: NIFI-11765 Upgrade to apache parent version 30
mattyb149 commented on PR #7450: URL: https://github.com/apache/nifi/pull/7450#issuecomment-1613629986

+1 LGTM, thanks for the upgrade! Merging to support/nifi-1.x and main
[GitHub] [nifi] turcsanyip commented on a diff in pull request #7449: NIFI-11334: PutIceberg processor instance interference due same class loader usage
turcsanyip commented on code in PR #7449: URL: https://github.com/apache/nifi/pull/7449#discussion_r1246952906

## nifi-nar-bundles/nifi-iceberg-bundle/nifi-iceberg-processors/src/main/java/org/apache/nifi/processors/iceberg/AbstractIcebergProcessor.java:

@@ -117,6 +122,15 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
         }
     }

+    @Override
+    public String getClassloaderIsolationKey(PropertyContext context) {
+        final KerberosUserService kerberosUserService = context.getProperty(KERBEROS_USER_SERVICE).asControllerService(KerberosUserService.class);
+        if (kerberosUserService != null) {
+            return kerberosUserService.getIdentifier();

Review Comment: I would suggest using the same classloader isolation key as the one in the other hadoop related components: the kerberos principal.

```
final KerberosUser kerberosUser = kerberosUserService.createKerberosUser();
return kerberosUser.getPrincipal();
```

The controller service identifier also works, but it may be too restrictive: it creates separate classloaders for controller services that have the same principal, even though they could share a classloader. E.g. the user can create a process group with the kerberos service and the iceberg processor in it and then copy it multiple times. Not the best design, because the kerberos service should be extracted into the parent process group in this case, but I can imagine it happening.
[GitHub] [nifi] turcsanyip commented on a diff in pull request #7449: NIFI-11334: PutIceberg processor instance interference due same class loader usage
turcsanyip commented on code in PR #7449: URL: https://github.com/apache/nifi/pull/7449#discussion_r1246952906

## nifi-nar-bundles/nifi-iceberg-bundle/nifi-iceberg-processors/src/main/java/org/apache/nifi/processors/iceberg/AbstractIcebergProcessor.java:

@@ -117,6 +122,15 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro
         }
     }

+    @Override
+    public String getClassloaderIsolationKey(PropertyContext context) {
+        final KerberosUserService kerberosUserService = context.getProperty(KERBEROS_USER_SERVICE).asControllerService(KerberosUserService.class);
+        if (kerberosUserService != null) {
+            return kerberosUserService.getIdentifier();

Review Comment: I would suggest using the same classloader isolation key as the one in the other hadoop related components: the kerberos principal.

```
final KerberosUser kerberosUser = kerberosUserService.createKerberosUser();
return kerberosUser.getPrincipal();
```

The controller service identifier also works, but it may be too restrictive: it creates separate classloaders for controller services that have the same principal, even though they could share a classloader. The user can create a process group with the kerberos service and the iceberg processor in it and then copy it multiple times. Not the best design, because the kerberos service should be extracted into the parent process group in this case, but I can imagine it happening.

## nifi-nar-bundles/nifi-iceberg-bundle/nifi-iceberg-services-api/src/main/java/org/apache/nifi/services/iceberg/IcebergCatalogService.java:

@@ -17,16 +17,18 @@
  */
 package org.apache.nifi.services.iceberg;

-import org.apache.hadoop.conf.Configuration;
-import org.apache.iceberg.catalog.Catalog;
 import org.apache.nifi.controller.ControllerService;

+import java.util.Map;
+
 /**
  * Provides a basic connector to Iceberg catalog services.
  */
 public interface IcebergCatalogService extends ControllerService {

-    Catalog getCatalog();
+    IcebergCatalogServiceType getCatalogServiceType();
+
+    Map getAdditionalParameters();

Review Comment: "additionalParameters" is quite generic, and the callers also use "properties" for the return value. `getCatalogProperties()` would be more descriptive. Also, the return type would be a `Map` (see the comment about the `IcebergCatalogProperty` enum above).

## nifi-nar-bundles/nifi-iceberg-bundle/nifi-iceberg-services-api/src/main/java/org/apache/nifi/services/iceberg/IcebergCatalogService.java:

@@ -17,16 +17,18 @@
 public interface IcebergCatalogService extends ControllerService {

-    Configuration getConfiguration();
+    String getConfigFiles();

Review Comment: It returns a comma separated list of file paths, which is not obvious at first. The method could parse the raw property value and return the paths in a list:

```
List getConfigFilePaths();
```
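The two suggestions in the review above (an enum-keyed properties map via `getCatalogProperties()`, and a `getConfigFilePaths()` method that splits the raw comma-separated property value) can be sketched together. The enum constants and exact signatures below mirror the reviewer's proposal but are assumptions for illustration, not the merged NiFi code:

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

public class CatalogServiceSketch {
    // Placeholder for the IcebergCatalogProperty enum mentioned in the review
    public enum IcebergCatalogProperty { METASTORE_URI, WAREHOUSE_LOCATION }

    // Enum-keyed map instead of free-form string keys: typos become compile errors
    public static Map<IcebergCatalogProperty, String> catalogProperties(String uri, String warehouse) {
        Map<IcebergCatalogProperty, String> props = new EnumMap<>(IcebergCatalogProperty.class);
        props.put(IcebergCatalogProperty.METASTORE_URI, uri);
        props.put(IcebergCatalogProperty.WAREHOUSE_LOCATION, warehouse);
        return props;
    }

    // Parse the raw comma-separated property value into a list of trimmed paths,
    // so callers never have to know the string format
    public static List<String> getConfigFilePaths(String configFiles) {
        List<String> paths = new ArrayList<>();
        if (configFiles != null && !configFiles.trim().isEmpty()) {
            for (String path : configFiles.split(",")) {
                paths.add(path.trim());
            }
        }
        return paths;
    }
}
```

An `EnumMap` is a natural fit here: iteration order is the enum declaration order and lookups avoid hashing entirely.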
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1584: MINIFICPP-1755 - Use std::span instead of gsl::span
szaszm commented on code in PR #1584: URL: https://github.com/apache/nifi-minifi-cpp/pull/1584#discussion_r1246939583

## libminifi/src/utils/LineByLineInputOutputStreamCallback.cpp:

@@ -67,7 +68,7 @@ void LineByLineInputOutputStreamCallback::readLine() {
     if (end_of_line != input_.end()) {
         ++end_of_line;
     }
     current_line_ = next_line_;
-    next_line_ = utils::span_to(gsl::make_span(&*current_pos_, &*end_of_line).as_span());
+    next_line_ = utils::span_to(utils::as_span(std::span(std::to_address(current_pos_), std::to_address(end_of_line))));

Review Comment: Is `std::to_address` guaranteed to work with vector iterators? I'm not sure iterators are considered "fancy pointers".
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1577: MINIFICPP-2020 - Protect MINIFI_HOME from mutual access
szaszm commented on code in PR #1577: URL: https://github.com/apache/nifi-minifi-cpp/pull/1577#discussion_r1246929741 ## libminifi/src/utils/FileMutex.cpp: ## @@ -0,0 +1,171 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include "utils/FileMutex.h" + +#include +#include +#include "utils/gsl.h" +#include "utils/OsUtils.h" +#include "utils/Error.h" + +#ifdef WIN32 + +namespace org::apache::nifi::minifi::utils { + +FileMutex::FileMutex(std::filesystem::path path): path_(std::move(path)) {} + +// we cannot assume the logging system to be initialized Review Comment: As long as it's only used in one place, it's fine like this. If it's ever needed somewhere, where logging is available, you can pass in a log callback to make logging polymorphic and independent of the logging facilities. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1540: MINIFICPP-2082 Move RocksDB stats to RepositoryMetrics
szaszm commented on code in PR #1540: URL: https://github.com/apache/nifi-minifi-cpp/pull/1540#discussion_r1246924730 ## libminifi/src/core/state/nodes/RepositoryMetricsSourceStore.cpp: ## @@ -0,0 +1,103 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +#include "core/state/nodes/RepositoryMetricsSourceStore.h" + +namespace org::apache::nifi::minifi::state::response { + +RepositoryMetricsSourceStore::RepositoryMetricsSourceStore(std::string name) : name_(std::move(name)) {} + +void RepositoryMetricsSourceStore::setRepositories(const std::vector> ) { + repositories_ = repositories; +} + +void RepositoryMetricsSourceStore::addRepository(const std::shared_ptr ) { + if (nullptr != repo) { +repositories_.push_back(repo); + } +} + +std::vector RepositoryMetricsSourceStore::serialize() const { + std::vector serialized; + for (const auto& repo : repositories_) { +SerializedResponseNode parent; +parent.name = repo->getRepositoryName(); +SerializedResponseNode is_running; +is_running.name = "running"; +is_running.value = repo->isRunning(); + +SerializedResponseNode is_full; +is_full.name = "full"; +is_full.value = repo->isFull(); + +SerializedResponseNode repo_size; +repo_size.name = "size"; +repo_size.value = repo->getRepositorySize(); + +SerializedResponseNode max_repo_size; +max_repo_size.name = "maxSize"; +max_repo_size.value = repo->getMaxRepositorySize(); + Review Comment: You can use designated initializers to describe the same data structure in a simpler way. Example from AgentInformation.h: ```cpp std::vector serialized = { {.name = "identifier", .value = AgentBuild::BUILD_IDENTIFIER}, {.name = "agentType", .value = "cpp"}, {.name = "buildInfo", .children = { {.name = "flags", .value = AgentBuild::COMPILER_FLAGS}, {.name = "compiler", .value = AgentBuild::COMPILER}, {.name = "version", .value = AgentBuild::VERSION}, {.name = "revision", .value = AgentBuild::BUILD_REV}, {.name = "timestamp", .value = static_cast(std::stoull(AgentBuild::BUILD_DATE))} }} }; ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-11197) yaml and json conversion processor
[ https://issues.apache.org/jira/browse/NIFI-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738698#comment-17738698 ] Daniel Stieglitz commented on NIFI-11197: - [~exceptionfactory] I assume what is being asked here is for a Yaml record reader similar to what NiFi already has for XML (XMLReader) and JSON (JsonPathReader and JsonTreeReader). I also assumed there would have to be an accompanying Yaml record writer. Does NiFi want to support a Yaml record reader and writer? > yaml and json conversion processor > -- > > Key: NIFI-11197 > URL: https://issues.apache.org/jira/browse/NIFI-11197 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.20.0 >Reporter: macdoor615 >Priority: Major > > The yaml format is basically equivalent to json. When used as a configuration > file, it is much more convenient than json. It can have comments and the file > is shorter. > More and more systems adopt yaml format. Now we developed a conversion tool > from yaml to json with the ExecuteGroovyScript processor. > It is recommended to add a processor that can convert between yaml and json -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1592: MINIFICPP-2131 Refactored GetTCP
fgerlits commented on code in PR #1592: URL: https://github.com/apache/nifi-minifi-cpp/pull/1592#discussion_r1243980137 ## extensions/standard-processors/processors/TailFile.cpp: ## @@ -343,10 +331,11 @@ void TailFile::onSchedule(const std::shared_ptr , throw Exception(PROCESSOR_EXCEPTION, "Failed to get StateManager"); } - std::string value; - - if (context->getProperty(Delimiter.getName(), value)) { -delimiter_ = parseDelimiter(value); + if (auto delimiter_str = context->getProperty(Delimiter)) { +auto parsed_delimiter = utils::StringUtils::parseCharacter(*delimiter_str); +if (!parsed_delimiter) + throw Exception(PROCESS_SCHEDULE_EXCEPTION, fmt::format("Invalid delimiter: {} (it must be a single character, whether escaped or not)", *delimiter_str)); +delimiter_ = *parsed_delimiter; Review Comment: This will invalidate some TailFile configs which were previously accepted (but probably shouldn't have been): if `Delimiter` contains more than one character, previously we used the first character, now we'll throw. I'm not sure if this is a problem, but it's something we should discuss. 
## extensions/standard-processors/processors/GetTCP.cpp: ## @@ -17,275 +17,287 @@ */ #include "GetTCP.h" -#ifndef WIN32 -#include -#endif #include -#include #include -#include #include -#include -#include #include -#include "io/ClientSocket.h" +#include +#include +#include "utils/net/AsioCoro.h" #include "io/StreamFactory.h" #include "utils/gsl.h" #include "utils/StringUtils.h" -#include "utils/TimeUtil.h" #include "core/ProcessContext.h" #include "core/ProcessSession.h" #include "core/ProcessSessionFactory.h" #include "core/PropertyBuilder.h" #include "core/Resource.h" -namespace org::apache::nifi::minifi::processors { +using namespace std::literals::chrono_literals; -const char *DataHandler::SOURCE_ENDPOINT_ATTRIBUTE = "source.endpoint"; +namespace org::apache::nifi::minifi::processors { const core::Property GetTCP::EndpointList( -core::PropertyBuilder::createProperty("endpoint-list")->withDescription("A comma delimited list of the endpoints to connect to. The format should be :.")->isRequired(true) -->build()); - -const core::Property GetTCP::ConcurrentHandlers( - core::PropertyBuilder::createProperty("concurrent-handler-count")->withDescription("Number of concurrent handlers for this session")->withDefaultValue(1)->build()); +core::PropertyBuilder::createProperty("Endpoint List") + ->withDescription("A comma delimited list of the endpoints to connect to. The format should be :.") + ->isRequired(true)->build()); -const core::Property GetTCP::ReconnectInterval( - core::PropertyBuilder::createProperty("reconnect-interval")->withDescription("The number of seconds to wait before attempting to reconnect to the endpoint.") -->withDefaultValue("5 s")->build()); - -const core::Property GetTCP::ReceiveBufferSize( - core::PropertyBuilder::createProperty("receive-buffer-size")->withDescription("The size of the buffer to receive data in. 
Default 16384 (16MB).")->withDefaultValue("16 MB") +const core::Property GetTCP::SSLContextService( +core::PropertyBuilder::createProperty("SSL Context Service") + ->withDescription("SSL Context Service Name") + ->asType()->build()); + +const core::Property GetTCP::MessageDelimiter( +core::PropertyBuilder::createProperty("Message Delimiter")->withDescription( +"Character that denotes the end of the message.") +->withDefaultValue("\\n")->build()); + +const core::Property GetTCP::MaxQueueSize( +core::PropertyBuilder::createProperty("Max Size of Message Queue") +->withDescription("Maximum number of messages allowed to be buffered before processing them when the processor is triggered. " + "If the buffer is full, the message is ignored. If set to zero the buffer is unlimited.") +->withDefaultValue(1) +->isRequired(true) ->build()); -const core::Property GetTCP::SSLContextService( -core::PropertyBuilder::createProperty("SSL Context Service")->withDescription("SSL Context Service Name")->asType()->build()); +const core::Property GetTCP::MaxBatchSize( +core::PropertyBuilder::createProperty("Max Batch Size") +->withDescription("The maximum number of messages to process at a time.") +->withDefaultValue(500) +->isRequired(true) +->build()); -const core::Property GetTCP::StayConnected( -core::PropertyBuilder::createProperty("Stay Connected")->withDescription("Determines if we keep the same socket despite having no data")->withDefaultValue(true)->build()); +const core::Property GetTCP::MaxMessageSize( +core::PropertyBuilder::createProperty("Maximum Message Size") + ->withDescription("Optional size of the buffer to receive data in.")->build()); -const core::Property
[jira] [Created] (MINIFICPP-2156) PutTCP tries to read passphrase from stdin
Martin Zink created MINIFICPP-2156: -- Summary: PutTCP tries to read passphrase from stdin Key: MINIFICPP-2156 URL: https://issues.apache.org/jira/browse/MINIFICPP-2156 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Reporter: Martin Zink Assignee: Martin Zink Fix For: 0.15.0 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-11761) Timed out minifi restart and kill doesn't work
[ https://issues.apache.org/jira/browse/NIFI-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Csaba Bejan updated NIFI-11761: --- Fix Version/s: 2.0.0 1.23.0 > Timed out minifi restart and kill doesn't work > -- > > Key: NIFI-11761 > URL: https://issues.apache.org/jira/browse/NIFI-11761 > Project: Apache NiFi > Issue Type: Bug > Components: MiNiFi >Reporter: Ferenc Kis >Assignee: Ferenc Kis >Priority: Major > Fix For: 2.0.0, 1.23.0 > > Time Spent: 1h > Remaining Estimate: 0h > > It looks like when graceful shutdown period expires and minifi is killed > during restart then the start is not happening. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (NIFI-11761) Timed out minifi restart and kill doesn't work
[ https://issues.apache.org/jira/browse/NIFI-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Csaba Bejan resolved NIFI-11761. Resolution: Fixed > Timed out minifi restart and kill doesn't work > -- > > Key: NIFI-11761 > URL: https://issues.apache.org/jira/browse/NIFI-11761 > Project: Apache NiFi > Issue Type: Bug > Components: MiNiFi >Reporter: Ferenc Kis >Assignee: Ferenc Kis >Priority: Major > Fix For: 2.0.0, 1.23.0 > > Time Spent: 1h > Remaining Estimate: 0h > > It looks like when graceful shutdown period expires and minifi is killed > during restart then the start is not happening. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-11761) Timed out minifi restart and kill doesn't work
[ https://issues.apache.org/jira/browse/NIFI-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738615#comment-17738615 ] ASF subversion and git services commented on NIFI-11761: Commit 543cf4d799cb7a9804d60aaf9bb125bd32f5a79c in nifi's branch refs/heads/support/nifi-1.x from Ferenc Kis [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=543cf4d799 ] NIFI-11761 Fixed MiNiFi restart issue when graceful shutdown period expires. MiNiFi restart sends bootstrap to background Signed-off-by: Csaba Bejan This closes #7448. > Timed out minifi restart and kill doesn't work > -- > > Key: NIFI-11761 > URL: https://issues.apache.org/jira/browse/NIFI-11761 > Project: Apache NiFi > Issue Type: Bug > Components: MiNiFi >Reporter: Ferenc Kis >Assignee: Ferenc Kis >Priority: Major > Time Spent: 1h > Remaining Estimate: 0h > > It looks like when graceful shutdown period expires and minifi is killed > during restart then the start is not happening. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-11761) Timed out minifi restart and kill doesn't work
[ https://issues.apache.org/jira/browse/NIFI-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17738614#comment-17738614 ] ASF subversion and git services commented on NIFI-11761: Commit 3c3cf9976ec18a9b67dc5a435a01e4d48be12151 in nifi's branch refs/heads/main from Ferenc Kis [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=3c3cf9976e ] NIFI-11761 Fixed MiNiFi restart issue when graceful shutdown period expires. MiNiFi restart sends bootstrap to background Signed-off-by: Csaba Bejan This closes #7448. > Timed out minifi restart and kill doesn't work > -- > > Key: NIFI-11761 > URL: https://issues.apache.org/jira/browse/NIFI-11761 > Project: Apache NiFi > Issue Type: Bug > Components: MiNiFi >Reporter: Ferenc Kis >Assignee: Ferenc Kis >Priority: Major > Time Spent: 50m > Remaining Estimate: 0h > > It looks like when graceful shutdown period expires and minifi is killed > during restart then the start is not happening. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] bejancsaba closed pull request #7448: NIFI-11761 Fixed MiNiFi restart issue when graceful shutdown period expires. MiNiFi restart sends bootstrap to background
bejancsaba closed pull request #7448: NIFI-11761 Fixed MiNiFi restart issue when graceful shutdown period expires. MiNiFi restart sends bootstrap to background URL: https://github.com/apache/nifi/pull/7448 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (MINIFICPP-2155) Improve StreamCallbacks error handling
Martin Zink created MINIFICPP-2155: -- Summary: Improve StreamCallbacks error handling Key: MINIFICPP-2155 URL: https://issues.apache.org/jira/browse/MINIFICPP-2155 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Martin Zink Currently io::InputStreamCallback and io::OutputStreamCallback use int64_t to return their results, where static_cast(-1) and static_cast(-2) signal that something went wrong, and any other value is the number of bytes written/read. It would be better to use nonstd::expected so we can further propagate the reason behind the errors, so we don't need logging inside these callbacks. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1587: MINIFICPP-2135 Add SSL support for Prometheus reporter
szaszm commented on code in PR #1587: URL: https://github.com/apache/nifi-minifi-cpp/pull/1587#discussion_r1246671763 ## docker/test/integration/features/MiNiFi_integration_test_driver.py: ## @@ -53,9 +53,9 @@ def __init__(self, context, feature_id: str): self.cluster.set_directory_bindings(self.docker_directory_bindings.get_directory_bindings(self.feature_id), self.docker_directory_bindings.get_data_directories(self.feature_id)) self.root_ca_cert, self.root_ca_key = make_ca("root CA") -minifi_client_cert, minifi_client_key = make_client_cert(common_name=f"minifi-cpp-flow-{self.feature_id}", - ca_cert=self.root_ca_cert, - ca_key=self.root_ca_key) +minifi_client_cert, minifi_client_key = make_cert_without_extended_usage(common_name=f"minifi-cpp-flow-{self.feature_id}", Review Comment: I think it would be more intuitive to refer to this as making server cert/key pair, and the `make_client_cert` could remain for client certificate generation. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1587: MINIFICPP-2135 Add SSL support for Prometheus reporter
szaszm commented on code in PR #1587: URL: https://github.com/apache/nifi-minifi-cpp/pull/1587#discussion_r1246669182 ## docker/test/integration/cluster/containers/MinifiContainer.py: ## @@ -109,12 +110,16 @@ def _create_properties(self): if not self.options.enable_provenance: f.write("nifi.provenance.repository.class.name=NoOpRepository\n") -if self.options.enable_prometheus: +if self.options.enable_prometheus or self.options.enable_prometheus_with_ssl: f.write("nifi.metrics.publisher.agent.identifier=Agent1\n") f.write("nifi.metrics.publisher.class=PrometheusMetricsPublisher\n") f.write("nifi.metrics.publisher.PrometheusMetricsPublisher.port=9936\n") f.write("nifi.metrics.publisher.metrics=RepositoryMetrics,QueueMetrics,PutFileMetrics,processorMetrics/Get.*,FlowInformation,DeviceInfoNode,AgentStatus\n") +if self.options.enable_prometheus_with_ssl: + f.write("nifi.metrics.publisher.PrometheusMetricsPublisher.certificate=/tmp/resources/prometheus-ssl/minifi-cpp-flow.crt\n") + f.write("nifi.metrics.publisher.PrometheusMetricsPublisher.ca.certificate=/tmp/resources/prometheus-ssl/root-ca.pem\n") Review Comment: I suggested throwing out prometheus server side ssl in my other reply, just noting on this thread as well. If we do that, extra certs are no longer needed. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1587: MINIFICPP-2135 Add SSL support for Prometheus reporter
szaszm commented on code in PR #1587: URL: https://github.com/apache/nifi-minifi-cpp/pull/1587#discussion_r124838 ## docker/test/integration/cluster/checkers/PrometheusChecker.py: ## @@ -18,7 +18,16 @@ class PrometheusChecker: def __init__(self): -self.prometheus_client = PrometheusConnect(url="http://localhost:9090", disable_ssl=True) +self.use_ssl = False + +def enable_ssl(self): +self.use_ssl = True + +def _getClient(self): +if self.use_ssl: +return PrometheusConnect(url="https://localhost:9090", disable_ssl=True) Review Comment: Do we even need the option of using SSL on the prometheus listener? It doesn't interact with minifi, since prometheus is the client in the metrics collection, and it doesn't need client certificates either. Why don't we just throw out the ssl option in PrometheusChecker altogether? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-11765) Upgrade to apache parent version 30
[ https://issues.apache.org/jira/browse/NIFI-11765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-11765: --- Status: Patch Available (was: Open) > Upgrade to apache parent version 30 > --- > > Key: NIFI-11765 > URL: https://issues.apache.org/jira/browse/NIFI-11765 > Project: Apache NiFi > Issue Type: Task >Affects Versions: 1.22.0 >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > Fix For: 1.latest, 2.latest > > Time Spent: 10m > Remaining Estimate: 0h > > There appears to be some weird compatibility issue with apache parent 29 and > Maven 3.8.x and 3.9.x. The scenario that produces the problem is running a > build with a system property to override a version, say > "-Dhadoop.version=..." and then some module that does not even reference > hadoop.version, but does have hadoop dependencies, like the Ranger stuff which > uses its own hadoop.version, ends up trying to resolve the version from > hadoop.version. It happens specifically during the process-remote-resources > phase: {code:java} > [INFO] --- remote-resources:1.7.0:process (process-resource-bundles) @ > nifi-ranger-plugin --- > [INFO] Preparing remote bundle org.apache:apache-jar-resource-bundle:1.4 {code} > There seem to be some significant changes to this apache-jar-resource-bundle > between 1.4 and 1.5, and apache parent 30 goes to 1.5. > https://repo1.maven.org/maven2/org/apache/apache/29/apache-29.pom > https://repo1.maven.org/maven2/org/apache/apache/30/apache-30.pom -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] bbende opened a new pull request, #7450: NIFI-11765 Upgrade to apache parent version 30
bbende opened a new pull request, #7450: URL: https://github.com/apache/nifi/pull/7450 # Summary [NIFI-11765](https://issues.apache.org/jira/browse/NIFI-11765) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [X] Build completed using `mvn clean install -P contrib-check` - [X] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-11765) Upgrade to apache parent version 30
Bryan Bende created NIFI-11765: -- Summary: Upgrade to apache parent version 30 Key: NIFI-11765 URL: https://issues.apache.org/jira/browse/NIFI-11765 Project: Apache NiFi Issue Type: Task Affects Versions: 1.22.0 Reporter: Bryan Bende Assignee: Bryan Bende Fix For: 1.latest, 2.latest There appears to be some weird compatibility issue with apache parent 29 and Maven 3.8.x and 3.9.x. The scenario that produces the problem is running a build with a system property to override a version, say "-Dhadoop.version=..." and then some module that does not even reference hadoop.version, but does have hadoop dependencies, like the Ranger stuff which uses its own hadoop.version, ends up trying to resolve the version from hadoop.version. It happens specifically during the process-remote-resources phase: {code:java} [INFO] --- remote-resources:1.7.0:process (process-resource-bundles) @ nifi-ranger-plugin --- [INFO] Preparing remote bundle org.apache:apache-jar-resource-bundle:1.4 {code} There seem to be some significant changes to this apache-jar-resource-bundle between 1.4 and 1.5, and apache parent 30 goes to 1.5. https://repo1.maven.org/maven2/org/apache/apache/29/apache-29.pom https://repo1.maven.org/maven2/org/apache/apache/30/apache-30.pom -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] ferencerdei commented on pull request #7448: NIFI-11761 Fixed MiNiFi restart issue when graceful shutdown period expires. MiNiFi restart sends bootstrap to background
ferencerdei commented on PR #7448: URL: https://github.com/apache/nifi/pull/7448#issuecomment-1613216957 Thanks for the update +1 from my side -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-11759) Remove Distributed Cache Map Service Client from ListHDFS
[ https://issues.apache.org/jira/browse/NIFI-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-11759: Fix Version/s: 2.0.0 (was: 2.latest) > Remove Distributed Cache Map Service Client from ListHDFS > - > > Key: NIFI-11759 > URL: https://issues.apache.org/jira/browse/NIFI-11759 > Project: Apache NiFi > Issue Type: Task > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > Fix For: 2.0.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Continuing the work of NIFI-6023 which deprecated/ignored the use of the > Distributed Map Cache in ListHDFS which was replaced with State Management, > this Jira is to remove the property from the processor for the main (upcoming > 2.0) branch. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] briansolo1985 commented on pull request #7448: NIFI-11761 Fixed MiNiFi restart issue when graceful shutdown period expires. MiNiFi restart sends bootstrap to background
briansolo1985 commented on PR #7448: URL: https://github.com/apache/nifi/pull/7448#issuecomment-1613154149 Thanks for your comments. I tried to address all of them. I had to change the log message because moving the related code to UnixProcessUtils changed the context as well. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request, #1597: MINIFICPP-2153 - Change default merge algorithm
adamdebreceni opened a new pull request, #1597: URL: https://github.com/apache/nifi-minifi-cpp/pull/1597 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically main)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (MINIFICPP-2154) Replace SecureSocketGetTCPTest with utils::net::getSSLContext tests
Martin Zink created MINIFICPP-2154: -- Summary: Replace SecureSocketGetTCPTest with utils::net::getSSLContext tests Key: MINIFICPP-2154 URL: https://issues.apache.org/jira/browse/MINIFICPP-2154 Project: Apache NiFi MiNiFi C++ Issue Type: Test Reporter: Martin Zink Fix For: 0.15.0 SecureSocketGetTCPTest mainly tests the various SSLContext service configurations, but in an overly convoluted way. After MINIFICPP-2131 we should just test the minifi::utils::net::getSSLContext part (that way it will test all the processors that use asio with ssl, not just GetTCP). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (MINIFICPP-2154) Replace SecureSocketGetTCPTest with utils::net::getSSLContext tests
[ https://issues.apache.org/jira/browse/MINIFICPP-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martin Zink reassigned MINIFICPP-2154: -- Assignee: Martin Zink > Replace SecureSocketGetTCPTest with utils::net::getSSLContext tests > --- > > Key: MINIFICPP-2154 > URL: https://issues.apache.org/jira/browse/MINIFICPP-2154 > Project: Apache NiFi MiNiFi C++ > Issue Type: Test >Reporter: Martin Zink >Assignee: Martin Zink >Priority: Minor > Fix For: 0.15.0 > > > SecureSocketGetTCPTest mainly tests the various SSLContext service > configurations, but in an overly convoluted way. > After MINIFICPP-2131 we should just test the > minifi::utils::net::getSSLContext part (that way it will test all the > processors that use asio with ssl, not just GetTCP). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (NIFI-11764) flowfile storage size mismatch
Giovanni created NIFI-11764: --- Summary: flowfile storage size mismatch Key: NIFI-11764 URL: https://issues.apache.org/jira/browse/NIFI-11764 Project: Apache NiFi Issue Type: Improvement Reporter: Giovanni Attachments: image-2023-06-29-10-24-00-879.png, image-2023-06-29-10-25-06-665.png, image-2023-06-29-10-27-17-702.png, image-2023-06-29-10-28-45-547.png, image-2023-06-29-10-30-07-266.png, image-2023-06-29-10-31-33-526.png Hi, Nifi is reporting a wrong value for the flowfile storage used space. If I check the dashboard it reports 528.05 MB on all nodes: !image-2023-06-29-10-24-00-879.png|width=574,height=206! The values are also confirmed by the nodes status history: !image-2023-06-29-10-25-06-665.png|width=584,height=431! However my monitoring tool reports 52KB only: !image-2023-06-29-10-27-17-702.png|width=997,height=165! These low values are confirmed on the hosts themselves: !image-2023-06-29-10-28-45-547.png|width=485,height=179! !image-2023-06-29-10-30-07-266.png|width=484,height=178! !image-2023-06-29-10-31-33-526.png|width=485,height=176! 
The flowfile repository settings are the same on all nodes:
* nifi1
{code:java}
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=/var/nifi/flowfile_repo/data
nifi.flowfile.repository.checkpoint.interval=20 secs
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=2{code}
* nifi2
{code:java}
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=/var/nifi/flowfile_repo/data
nifi.flowfile.repository.checkpoint.interval=20 secs
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=2{code}
* nifi3
{code:java}
# FlowFile Repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=/var/nifi/flowfile_repo/data
nifi.flowfile.repository.checkpoint.interval=20 secs
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.queue.swap.threshold=2{code}

I also monitored the disk usage in real time: it reached just under 100 MB on each node, and only for a few seconds. That is still far from the steady values reported in the GUI.
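When chasing a discrepancy like this, it helps to measure the repository directory independently of both NiFi and the monitoring tool. A small Python sketch (the path is illustrative; this is just `du -sb`-style accounting, not anything NiFi-specific):

```python
import os


def directory_size_bytes(root: str) -> int:
    """Sum the sizes of all regular files under root, similar to `du -sb`."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skip broken symlinks and the like
                total += os.path.getsize(path)
    return total


# Example usage: compare against the value the NiFi UI reports, e.g.
#   directory_size_bytes("/var/nifi/flowfile_repo/data")
```

Note that the NiFi dashboard's "FlowFile storage" figure reflects the repository's own accounting, which can differ from on-disk bytes (write-ahead log checkpointing reclaims space between snapshots), so a gap between the two numbers is not by itself proof of a bug.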
[GitHub] [nifi] bejancsaba commented on a diff in pull request #7448: NIFI-11761 Fixed MiNiFi restart issue when graceful shutdown period expires. MiNiFi restart sends bootstrap to background
bejancsaba commented on code in PR #7448: URL: https://github.com/apache/nifi/pull/7448#discussion_r1246295415

## minifi/minifi-bootstrap/src/main/java/org/apache/nifi/minifi/bootstrap/command/StopRunner.java:

@@ -112,6 +114,19 @@
 private void gracefulShutDownMiNiFiProcess(long minifiPid) throws IOException {
     if (minifiPid != UNINITIALIZED) {
         processUtils.shutdownProcess(minifiPid, "MiNiFi has not finished shutting down after {} seconds. Killing process.", gracefulShutdownParameterProvider.getGracefulShutdownSeconds());
+        int maxRetry = 5;
+        while (processUtils.isProcessRunning(minifiPid)) {
+            if (maxRetry == 0) {
+                throw new IOException("Failed to stop MiNiFi process. MiNiFi process is still running after graceful shutdown has completed");

Review Comment: We could explicitly say here "...after graceful shutdown completed and a kill was attempted afterwards", or something along those lines. I think we can only get here if the graceful period expired, a kill was attempted, and the process still didn't stop. Right?

## minifi/minifi-bootstrap/src/main/java/org/apache/nifi/minifi/bootstrap/command/StopRunner.java:

@@ -112,6 +114,19 @@ (same hunk as above, continuing with:)
+            CMD_LOGGER.debug("MiNiFi process is still running after shutdown has completed");

Review Comment: I think this could be WARN, as if we are here that is already not good.

## minifi/minifi-bootstrap/src/main/java/org/apache/nifi/minifi/bootstrap/command/StopRunner.java:

@@ -112,6 +114,19 @@ (same hunk as above, up to:)
+        int maxRetry = 5;

Review Comment: I'm not sure whether it would be useful to externalise at least this one (maybe the sleep time is not justified, but it can be argued that this makes sense to be configurable). What do you think?
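The loop under review is a standard bounded-retry pattern: poll a condition, decrement a retry budget, sleep between attempts, and fail loudly when the budget is exhausted. A language-neutral sketch in Python (names such as `wait_until_stopped` are hypothetical; the real implementation lives in `StopRunner.java` and polls `processUtils.isProcessRunning`):

```python
import time


def wait_until_stopped(is_running, max_retry: int = 5, sleep_seconds: float = 1.0) -> None:
    """Poll is_running() up to max_retry times after a kill was issued,
    raising if the process is still alive once the budget is exhausted."""
    while is_running():
        if max_retry == 0:
            raise RuntimeError(
                "Failed to stop process: still running after graceful "
                "shutdown completed and a kill was attempted")
        max_retry -= 1
        time.sleep(sleep_seconds)
```

Making `max_retry` and `sleep_seconds` parameters (rather than hard-coded constants) is exactly the "externalise this" point raised in the review.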
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1583: MINIFICPP-1719 Replace LibreSSL with OpenSSL 3.1
szaszm commented on code in PR #1583: URL: https://github.com/apache/nifi-minifi-cpp/pull/1583#discussion_r1246256189

## extensions/standard-processors/processors/HashContent.h:

@@ -49,21 +48,25 @@ namespace { // NOLINT
 HashReturnType ret_val;
 ret_val.second = 0;
 std::array buffer{};
-MD5_CTX context;
-MD5_Init(&context);
+EVP_MD_CTX *context = EVP_MD_CTX_new();
+const auto guard = gsl::finally([context]() {
+  EVP_MD_CTX_free(context);
+});
+const EVP_MD *md = EVP_md5();
+EVP_DigestInit_ex(context, md, nullptr);

Review Comment: Is there any reason not to inline the digest type call?
```suggestion
EVP_DigestInit_ex(context, EVP_md5(), nullptr);
```

## libminifi/src/core/state/Value.cpp:

@@ -34,25 +34,29 @@
 const std::type_index Value::BOOL_TYPE = std::type_index(typeid(bool));
 const std::type_index Value::DOUBLE_TYPE = std::type_index(typeid(double));
 const std::type_index Value::STRING_TYPE = std::type_index(typeid(std::string));
-void hashNode(const SerializedResponseNode& node, SHA512_CTX& ctx) {
-  SHA512_Update(&ctx, node.name.c_str(), node.name.length());
+void hashNode(const SerializedResponseNode& node, EVP_MD_CTX* ctx) {

Review Comment: If ctx is expected to be valid/not-null, then keeping it as a reference would document this on the interface.

## extensions/standard-processors/processors/HashContent.h:

@@ -49,21 +48,25 @@ (same hunk as above, continuing:)
 size_t ret = 0;
 do {
   ret = stream->read(buffer);
   if (ret > 0) {
-    MD5_Update(&context, buffer.data(), ret);
+    EVP_DigestUpdate(context, buffer.data(), ret);
     ret_val.second += gsl::narrow(ret);
   }
 } while (ret > 0);
 if (ret_val.second > 0) {
-  std::array digest{};
-  MD5_Final(reinterpret_cast<unsigned char*>(digest.data()), &context);
+  std::array digest{};
+  EVP_DigestFinal_ex(context, reinterpret_cast<unsigned char*>(digest.data()), nullptr);

Review Comment: The array is sized for SHA512 now. Did you verify that the resulting hash is trimmed to MD5 size?
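The migration from the legacy `MD5_Init`/`MD5_Update`/`MD5_Final` calls to the EVP interface keeps the same three-phase streaming-digest shape: initialize a context, feed it chunk by chunk, then finalize. The same pattern, shown here with Python's `hashlib` purely for illustration (not OpenSSL's C API):

```python
import hashlib


def md5_of_stream(chunks) -> str:
    """Streaming digest in the init/update/final style, mirroring
    EVP_DigestInit_ex, EVP_DigestUpdate, and EVP_DigestFinal_ex."""
    ctx = hashlib.md5()      # like EVP_DigestInit_ex(context, EVP_md5(), nullptr)
    for chunk in chunks:
        ctx.update(chunk)    # like EVP_DigestUpdate(context, buffer, n)
    return ctx.hexdigest()   # like EVP_DigestFinal_ex(context, digest, nullptr)


# The key property of a streaming digest: chunking must not change the result.
assert md5_of_stream([b"ab", b"c"]) == md5_of_stream([b"abc"])
```

This chunking-invariance property is also a cheap regression check for the reviewer's concern: finalizing into an oversized (SHA512-length) buffer must still yield exactly the 16-byte MD5 digest.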
[jira] [Created] (NIFI-11763) evaluateELString not evaluating ContextParameters
Dirk Hennig created NIFI-11763:
---
Summary: evaluateELString not evaluating ContextParameters
Key: NIFI-11763
URL: https://issues.apache.org/jira/browse/NIFI-11763
Project: Apache NiFi
Issue Type: Bug
Components: Core Framework
Affects Versions: 1.22.0, 1.21.0
Environment: Linux NiFi 3-node cluster, 8 CPU cores and 32 GB of RAM per node
Reporter: Dirk Hennig
Attachments: Complex_UpdateAttribute_Test.json

When using Expression Language it was possible to look up parameters from ContextParameters and variables from the VariableRegistry. Examples:

${ literal('#\{Tenant-Testing-HOST1}'):evaluateELString() }
${ literal('${Testvar}'):evaluateELString() }

Both were working in NiFi 1.18.0. With NiFi 1.21.0 this stopped working for ContextParameters; only variable lookups still work. Please see the attached template with an example flow.
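The behaviour the report relies on is a two-pass evaluation: `literal()` protects the inner expression on the first pass, and `evaluateELString()` resolves it on the second pass against both scopes, `#{...}` for context parameters and `${...}` for variables. A toy Python model of that expectation (entirely hypothetical; this is not how NiFi's EL engine is implemented, it only encodes what the reporter expects to happen):

```python
import re


def evaluate_el_string(text: str, parameters: dict, variables: dict) -> str:
    """Resolve #{name} from parameters and ${name} from variables,
    leaving unknown references untouched. A toy model only."""
    text = re.sub(r"#\{([^}]+)\}",
                  lambda m: parameters.get(m.group(1), m.group(0)), text)
    text = re.sub(r"\$\{([^}]+)\}",
                  lambda m: variables.get(m.group(1), m.group(0)), text)
    return text


params = {"Tenant-Testing-HOST1": "host1.example.org"}      # hypothetical values
variables = {"Testvar": "value-from-registry"}

# The bug report: in 1.21.0+ only the second assertion's behaviour survives.
assert evaluate_el_string("#{Tenant-Testing-HOST1}", params, variables) == "host1.example.org"
assert evaluate_el_string("${Testvar}", params, variables) == "value-from-registry"
```

In the reported regression, the first lookup (context parameters) returns unresolved text while the second (variables) still resolves.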
[GitHub] [nifi] ferencerdei commented on a diff in pull request #7448: NIFI-11761 Fixed MiNiFi restart issue when graceful shutdown period expires. MiNiFi restart sends bootstrap to background
ferencerdei commented on code in PR #7448: URL: https://github.com/apache/nifi/pull/7448#discussion_r1246230431

## minifi/minifi-bootstrap/src/main/java/org/apache/nifi/minifi/bootstrap/command/StopRunner.java:

@@ -112,6 +114,19 @@
 private void gracefulShutDownMiNiFiProcess(long minifiPid) throws IOException {
     if (minifiPid != UNINITIALIZED) {
         processUtils.shutdownProcess(minifiPid, "MiNiFi has not finished shutting down after {} seconds. Killing process.", gracefulShutdownParameterProvider.getGracefulShutdownSeconds());
+        int maxRetry = 5;
+        while (processUtils.isProcessRunning(minifiPid)) {

Review Comment: What do you think about putting this into processUtils' killProcessTree method? It is called from multiple places (from the StopRunner directly, and from the shutdownProcess method as well).
[GitHub] [nifi] mark-bathori opened a new pull request, #7449: NIFI-11334: PutIceberg processor instance interference due same class loader usage
mark-bathori opened a new pull request, #7449: URL: https://github.com/apache/nifi/pull/7449

# Summary
[NIFI-11334](https://issues.apache.org/jira/browse/NIFI-11334)

# Tracking
Please complete the following tracking steps prior to pull request creation.

### Issue Tracking
- [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking
- [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [ ] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting
- [ ] Pull Request based on current revision of the `main` branch
- [ ] Pull Request refers to a feature branch with one commit containing changes

# Verification
Please indicate the verification steps performed prior to pull request creation.

### Build
- [x] Build completed using `mvn clean install -P contrib-check`
- [x] JDK 17

### Licensing
- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation
- [ ] Documentation formatting appears as expected in rendered files
[GitHub] [nifi] briansolo1985 opened a new pull request, #7448: NIFI-11761 Fixed MiNiFi restart issue when graceful shutdown period expires. MiNiFi restart sends bootstrap to background
briansolo1985 opened a new pull request, #7448: URL: https://github.com/apache/nifi/pull/7448

# Summary
[NIFI-11761](https://issues.apache.org/jira/browse/NIFI-11761)

# Tracking
Please complete the following tracking steps prior to pull request creation.

### Issue Tracking
- [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking
- [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting
- [x] Pull Request based on current revision of the `main` branch
- [x] Pull Request refers to a feature branch with one commit containing changes

# Verification
Please indicate the verification steps performed prior to pull request creation.

### Build
- [x] Build completed using `mvn clean install -P contrib-check`
- [x] JDK 17

### Licensing
- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation
- [ ] Documentation formatting appears as expected in rendered files