[jira] [Created] (NIFI-9528) NiFi Cluster error when using managed-authorizer in 1.15.1 version
Sandip Singh created NIFI-9528: -- Summary: NiFi Cluster error when using managed-authorizer in 1.15.1 version Key: NIFI-9528 URL: https://issues.apache.org/jira/browse/NIFI-9528 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.15.1 Reporter: Sandip Singh I was trying to set up a NiFi cluster, version 1.15.1, using NiFi-toolkit certificates in server/client mode, with two nodes on two different AWS EC2 instances, using nifi.security.user.authorizer=managed-authorizer instead of the default {{single-user-authorizer}} and commenting out the Single User Authorizer definition from login-identity-providers.xml, but NiFi fails to start and throws the following exception in nifi-app.log: _...org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration': Unsatisfied dependency expressed through method 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is org.springframework.beans.factory.BeanExpressionException: Expression parsing failed; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency expressed through method 'setJwtAuthenticationProvider' parameter 0_ Exactly the same configuration works fine on NiFi 1.13.2. -- This message was sent by Atlassian Jira (v8.20.1#820001)
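For context on the report above: a managed-authorizer setup is normally wired through an authorizer defined in authorizers.xml (the Single User Authorizer also lives there, while login-identity-providers.xml holds the login provider). The sketch below shows the usual shape of such a configuration; all identities and file paths are placeholders, not values taken from this report:

{code:xml}
<!-- assumes nifi.properties contains: nifi.security.user.authorizer=managed-authorizer -->
<authorizers>
    <userGroupProvider>
        <identifier>file-user-group-provider</identifier>
        <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
        <property name="Users File">./conf/users.xml</property>
        <!-- placeholder node identities -->
        <property name="Initial User Identity 1">CN=node1.example.com, OU=NIFI</property>
        <property name="Initial User Identity 2">CN=node2.example.com, OU=NIFI</property>
    </userGroupProvider>
    <accessPolicyProvider>
        <identifier>file-access-policy-provider</identifier>
        <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
        <property name="User Group Provider">file-user-group-provider</property>
        <property name="Authorizations File">./conf/authorizations.xml</property>
        <property name="Initial Admin Identity">CN=admin, OU=NIFI</property>
        <property name="Node Identity 1">CN=node1.example.com, OU=NIFI</property>
        <property name="Node Identity 2">CN=node2.example.com, OU=NIFI</property>
    </accessPolicyProvider>
    <authorizer>
        <identifier>managed-authorizer</identifier>
        <class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
        <property name="Access Policy Provider">file-access-policy-provider</property>
    </authorizer>
</authorizers>
{code}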
[GitHub] [nifi] github-actions[bot] closed pull request #5321: NIFI-8960 Create ability for EvaluateJsonPath processor to match any …
github-actions[bot] closed pull request #5321: URL: https://github.com/apache/nifi/pull/5321 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-9525) RPM build does not produce working Nifi
[ https://issues.apache.org/jira/browse/NIFI-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17468200#comment-17468200 ] Gregory M. Foreman commented on NIFI-9525: -- [~joewitt] Yes, happy to assist. > RPM build does not produce working Nifi > --- > > Key: NIFI-9525 > URL: https://issues.apache.org/jira/browse/NIFI-9525 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.15.1, 1.15.2 > Environment: Centos 7 >Reporter: Gregory M. Foreman >Priority: Major > > Maven RPM build fails to produce an operational Nifi installation. > > > > > {code:bash} > $ mvn -version > Apache Maven 3.8.4 (9b656c72d54e5bacbed989b64718c159fe39b537) > Maven home: /opt/maven > Java version: 1.8.0_312, vendor: Red Hat, Inc., runtime: > /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "linux", version: "3.10.0-1160.49.1.el7.x86_64", arch: "amd64", > family: "unix" > $ mvn clean install -Prpm -DskipTests > $ yum localinstall > nifi-assembly/target/rpm/nifi-bin/RPMS/noarch/nifi-1.15.1-1.el7.noarch.rpm > $ /opt/nifi/nifi-1.15.1/bin/nifi.sh start > nifi.sh: JAVA_HOME not set; results may vary > Java home: > NiFi home: /opt/nifi/nifi-1.15.1 > Bootstrap Config File: /opt/nifi/nifi-1.15.1/conf/bootstrap.conf > Exception in thread "main" java.lang.NoClassDefFoundError: > org/apache/nifi/security/util/TlsConfiguration > at java.lang.ClassLoader.defineClass1(Native Method) > at java.lang.ClassLoader.defineClass(ClassLoader.java:756) > at > java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) > at java.net.URLClassLoader.defineClass(URLClassLoader.java:473) > at java.net.URLClassLoader.access$100(URLClassLoader.java:74) > at java.net.URLClassLoader$1.run(URLClassLoader.java:369) > at java.net.URLClassLoader$1.run(URLClassLoader.java:363) > at java.security.AccessController.doPrivileged(Native Method) > at 
java.net.URLClassLoader.findClass(URLClassLoader.java:362) > at java.lang.ClassLoader.loadClass(ClassLoader.java:418) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) > at java.lang.ClassLoader.loadClass(ClassLoader.java:351) > at > org.apache.nifi.bootstrap.util.SecureNiFiConfigUtil.configureSecureNiFiProperties(SecureNiFiConfigUtil.java:124) > at org.apache.nifi.bootstrap.RunNiFi.start(RunNiFi.java:1247) > at org.apache.nifi.bootstrap.RunNiFi.main(RunNiFi.java:289) > Caused by: java.lang.ClassNotFoundException: > org.apache.nifi.security.util.TlsConfiguration > at java.net.URLClassLoader.findClass(URLClassLoader.java:387) > at java.lang.ClassLoader.loadClass(ClassLoader.java:418) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) > at java.lang.ClassLoader.loadClass(ClassLoader.java:351) > ... 15 more > {code}
[jira] [Commented] (NIFI-9525) RPM build does not produce working Nifi
[ https://issues.apache.org/jira/browse/NIFI-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17468192#comment-17468192 ] Joe Witt commented on NIFI-9525: [~gforeman02] Hmm that would be awesome and far easier to maintain going forward if I am following this correctly. Are you interested in filing a PR for this? Great idea there if this works out!
[jira] [Updated] (NIFI-9525) RPM build does not produce working Nifi
[ https://issues.apache.org/jira/browse/NIFI-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gregory M. Foreman updated NIFI-9525: - Affects Version/s: 1.15.2
[jira] [Comment Edited] (NIFI-9525) RPM build does not produce working Nifi
[ https://issues.apache.org/jira/browse/NIFI-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17468168#comment-17468168 ] Gregory M. Foreman edited comment on NIFI-9525 at 1/3/22, 7:44 PM: --- The lib directories differ between the maven build and the rpm package. The nifi-assembly pom contains a number of include/exclude package statements to assemble the lib directory for the rpm. I replaced the existing mapping for the lib directory with the entry below and it ran fine: {code:xml} /opt/nifi/nifi-${project.version}/lib ${project.build.directory}/nifi-${project.version}-bin/nifi-${project.version}/lib {code} Is this an option? Or is the include/exclude approach used for a specific reason? was (Author: gforeman02): The lib directories differ between the maven build and the rpm package. The nifi-assembly pom contains a number of include/exclude package statements to assemble the lib directory for the rpm. I replaced the existing mapping for the lib directory with the entry below and it ran fine: {code:XML} /opt/nifi/nifi-${project.version}/lib ${project.build.directory}/nifi-${project.version}-bin/nifi-${project.version}/lib {code} Is this an option? Or are the include/exclude approach used for a specific reason?
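The {code:xml} block in the comment above lost its XML tags in transit; under the rpm-maven-plugin's mapping syntax, the replacement entry described would likely have looked roughly like the following. This is a reconstruction from the surviving directory and location paths, not the exact text of the comment:

{code:xml}
<!-- reconstructed rpm-maven-plugin mapping: copy the assembled lib directory
     from the binary build output into the RPM's install location -->
<mapping>
    <directory>/opt/nifi/nifi-${project.version}/lib</directory>
    <sources>
        <source>
            <location>${project.build.directory}/nifi-${project.version}-bin/nifi-${project.version}/lib</location>
        </source>
    </sources>
</mapping>
{code}

The idea is to mirror the known-good binary assembly output directly instead of maintaining a hand-curated include/exclude list that can drift out of sync with the real lib directory.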
[jira] [Commented] (NIFI-9525) RPM build does not produce working Nifi
[ https://issues.apache.org/jira/browse/NIFI-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17468168#comment-17468168 ] Gregory M. Foreman commented on NIFI-9525: -- The lib directories differ between the maven build and the rpm package. The nifi-assembly pom contains a number of include/exclude package statements to assemble the lib directory for the rpm. I replaced the existing mapping for the lib directory with the entry below and it ran fine: {code:xml} /opt/nifi/nifi-${project.version}/lib ${project.build.directory}/nifi-${project.version}-bin/nifi-${project.version}/lib {code} Is this an option? Or is the include/exclude approach used for a specific reason?
[GitHub] [nifi-minifi-cpp] szaszm closed pull request #1229: MINIFICPP-1704 - Update version number to 0.12.0
szaszm closed pull request #1229: URL: https://github.com/apache/nifi-minifi-cpp/pull/1229
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1208: MINIFICPP-1678 Create PutUDP processor
szaszm commented on a change in pull request #1208: URL: https://github.com/apache/nifi-minifi-cpp/pull/1208#discussion_r777625820 ## File path: extensions/standard-processors/processors/PutUDP.h ## @@ -0,0 +1,54 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +#pragma once +#include +#include +#include +#include + +#include "Processor.h" +#include "utils/Export.h" + +namespace org::apache::nifi::minifi::core::logging { class Logger; } + +namespace org::apache::nifi::minifi::processors { +class PutUDP final : public core::Processor { + public: + EXTENSIONAPI static const core::Property Hostname; + EXTENSIONAPI static const core::Property Port; + + EXTENSIONAPI static const core::Relationship Success; + EXTENSIONAPI static const core::Relationship Failure; + + explicit PutUDP(const std::string& name, const utils::Identifier& uuid = {}); + PutUDP(const PutUDP&) = delete; + PutUDP& operator=(const PutUDP&) = delete; + ~PutUDP() final; + + void initialize() final; + void notifyStop() final; + void onSchedule(core::ProcessContext*, core::ProcessSessionFactory *) final; + void onTrigger(core::ProcessContext*, core::ProcessSession*) final; + + core::annotation::Input getInputRequirement() const noexcept final { return core::annotation::Input::INPUT_REQUIRED; } + bool isSingleThreaded() const noexcept final { return true; /* for now */ } + private: + std::string hostname_; + std::string port_; // Can be a service name, like http or ssh Review comment: moved in 30cc7b4
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1208: MINIFICPP-1678 Create PutUDP processor
szaszm commented on a change in pull request #1208: URL: https://github.com/apache/nifi-minifi-cpp/pull/1208#discussion_r777622464 ## File path: extensions/standard-processors/tests/unit/PutUDPTests.cpp ## @@ -0,0 +1,112 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#include +#include +#include +#include +#include "SingleInputTestController.h" +#include "PutUDP.h" +#include "utils/net/DNS.h" +#include "utils/net/Socket.h" +#include "utils/expected.h" +#include "utils/StringUtils.h" + +namespace org::apache::nifi::minifi::processors { + +namespace { +struct DatagramListener { + DatagramListener(const char* const hostname, const char* const port) +:resolved_names_{utils::net::resolveHost(hostname, port, utils::net::IpProtocol::Udp).value()}, + open_socket_{utils::net::open_socket(resolved_names_.get()) +| utils::valueOrElse([=]() -> utils::net::OpenSocketResult { throw std::runtime_error{utils::StringUtils::join_pack("Failed to connect to ", hostname, " on port ", port)}; })} + { +const auto bind_result = bind(open_socket_.socket_.get(), open_socket_.selected_name->ai_addr, open_socket_.selected_name->ai_addrlen); +if (bind_result == utils::net::SocketError) { + throw std::runtime_error{utils::StringUtils::join_pack("bind: ", utils::net::get_last_socket_error().message())}; +} + } + + struct ReceiveResult { +std::string remote_address; +std::string message; + }; + + [[nodiscard]] ReceiveResult receive(const size_t max_message_size = 8192) const { +ReceiveResult result; +result.message.resize(max_message_size); +sockaddr_storage remote_address{}; +socklen_t addrlen = sizeof(remote_address); +const auto recv_result = recvfrom(open_socket_.socket_.get(), result.message.data(), result.message.size(), 0, std::launder(reinterpret_cast(&remote_address)), &addrlen); +if (recv_result == utils::net::SocketError) { + throw std::runtime_error{utils::StringUtils::join_pack("recvfrom: ", utils::net::get_last_socket_error().message())}; +} +result.message.resize(gsl::narrow(recv_result)); +result.remote_address = utils::net::sockaddr_ntop(std::launder(reinterpret_cast(&remote_address))); +return result; + } + + std::unique_ptr resolved_names_; + utils::net::OpenSocketResult open_socket_; +}; +} // namespace + +// Testing the failure 
relationship is not required, because since UDP in general without guarantees, flow files are always routed to success, unless there is +// some weird IO error with the content repo. +TEST_CASE("PutUDP", "[putudp]") { + const auto putudp = std::make_shared("PutUDP"); + auto random_engine = std::mt19937{std::random_device{}()}; // NOLINT: "Missing space before { [whitespace/braces] [5]" + // most systems use ports 32768 - 65535 as ephemeral ports, so avoid binding to those + const auto port = std::uniform_int_distribution{1, 32768 - 1}(random_engine); + const auto port_str = std::to_string(port); + + test::SingleInputTestController controller{putudp}; + LogTestController::getInstance().setTrace(); + LogTestController::getInstance().setTrace(); + LogTestController::getInstance().setLevelByClassName(spdlog::level::trace, "org::apache::nifi::minifi::core::ProcessContextExpr"); + putudp->setProperty(PutUDP::Hostname, "${literal('localhost')}"); + putudp->setProperty(PutUDP::Port, utils::StringUtils::join_pack("${literal('", port_str, "')}")); + + DatagramListener listener{"localhost", port_str.c_str()}; + + { +const char* const message = "first message: hello"; +const auto result = controller.trigger(message); +const auto& success_flow_files = result.at(PutUDP::Success); +REQUIRE(success_flow_files.size() == 1); +REQUIRE(result.at(PutUDP::Failure).empty()); +REQUIRE(controller.plan->getContent(success_flow_files[0]) == message); +auto receive_result = listener.receive(); +REQUIRE(receive_result.message == message); +REQUIRE(!receive_result.remote_address.empty()); + } + + { Review comment: I just wanted to minimize the chance of collision and avoid SO_REUSEADDR, because [it can cause problems](https://stackoverflow.com/a/3233022).
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1208: MINIFICPP-1678 Create PutUDP processor
fgerlits commented on a change in pull request #1208: URL: https://github.com/apache/nifi-minifi-cpp/pull/1208#discussion_r777617491 ## File path: extensions/standard-processors/tests/unit/PutUDPTests.cpp ## Review comment: Once `UniqueSocketHandle` has a destructor, `DatagramListener` will close the socket on destruction, so we will still only have two sockets op
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1208: MINIFICPP-1678 Create PutUDP processor
fgerlits commented on a change in pull request #1208: URL: https://github.com/apache/nifi-minifi-cpp/pull/1208#discussion_r777617491 ## File path: extensions/standard-processors/tests/unit/PutUDPTests.cpp ## @@ -0,0 +1,112 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#include +#include +#include +#include +#include "SingleInputTestController.h" +#include "PutUDP.h" +#include "utils/net/DNS.h" +#include "utils/net/Socket.h" +#include "utils/expected.h" +#include "utils/StringUtils.h" + +namespace org::apache::nifi::minifi::processors { + +namespace { +struct DatagramListener { + DatagramListener(const char* const hostname, const char* const port) +:resolved_names_{utils::net::resolveHost(hostname, port, utils::net::IpProtocol::Udp).value()}, + open_socket_{utils::net::open_socket(resolved_names_.get()) +| utils::valueOrElse([=]() -> utils::net::OpenSocketResult { throw std::runtime_error{utils::StringUtils::join_pack("Failed to connect to ", hostname, " on port ", port)}; })} + { +const auto bind_result = bind(open_socket_.socket_.get(), open_socket_.selected_name->ai_addr, open_socket_.selected_name->ai_addrlen); +if (bind_result == utils::net::SocketError) { + throw std::runtime_error{utils::StringUtils::join_pack("bind: ", utils::net::get_last_socket_error().message())}; +} + } + + struct ReceiveResult { +std::string remote_address; +std::string message; + }; + + [[nodiscard]] ReceiveResult receive(const size_t max_message_size = 8192) const { +ReceiveResult result; +result.message.resize(max_message_size); +sockaddr_storage remote_address{}; +socklen_t addrlen = sizeof(remote_address); +const auto recv_result = recvfrom(open_socket_.socket_.get(), result.message.data(), result.message.size(), 0, std::launder(reinterpret_cast(&remote_address)), &addrlen); +if (recv_result == utils::net::SocketError) { + throw std::runtime_error{utils::StringUtils::join_pack("recvfrom: ", utils::net::get_last_socket_error().message())}; +} +result.message.resize(gsl::narrow(recv_result)); +result.remote_address = utils::net::sockaddr_ntop(std::launder(reinterpret_cast(&remote_address))); +return result; + } + + std::unique_ptr resolved_names_; + utils::net::OpenSocketResult open_socket_; +}; +} // namespace + +// Testing the failure 
relationship is not required, because UDP in general comes without guarantees, flow files are always routed to success, unless there is +// some weird IO error with the content repo. +TEST_CASE("PutUDP", "[putudp]") { + const auto putudp = std::make_shared<PutUDP>("PutUDP"); + auto random_engine = std::mt19937{std::random_device{}()}; // NOLINT: "Missing space before { [whitespace/braces] [5]" + // most systems use ports 32768 - 65535 as ephemeral ports, so avoid binding to those + const auto port = std::uniform_int_distribution{1, 32768 - 1}(random_engine); + const auto port_str = std::to_string(port); + + test::SingleInputTestController controller{putudp}; + LogTestController::getInstance().setTrace(); + LogTestController::getInstance().setTrace(); + LogTestController::getInstance().setLevelByClassName(spdlog::level::trace, "org::apache::nifi::minifi::core::ProcessContextExpr"); + putudp->setProperty(PutUDP::Hostname, "${literal('localhost')}"); + putudp->setProperty(PutUDP::Port, utils::StringUtils::join_pack("${literal('", port_str, "')}")); + + DatagramListener listener{"localhost", port_str.c_str()}; + + { +const char* const message = "first message: hello"; +const auto result = controller.trigger(message); +const auto& success_flow_files = result.at(PutUDP::Success); +REQUIRE(success_flow_files.size() == 1); +REQUIRE(result.at(PutUDP::Failure).empty()); +REQUIRE(controller.plan->getContent(success_flow_files[0]) == message); +auto receive_result = listener.receive(); +REQUIRE(receive_result.message == message); +REQUIRE(!receive_result.remote_address.empty()); + } + + { Review comment: Once `UniqueSocketHandle` has a destructor, `DatagramListener` will close the socket on destruction, so we will still only have two sockets op
[GitHub] [nifi-minifi-cpp] fgerlits commented on a change in pull request #1208: MINIFICPP-1678 Create PutUDP processor
fgerlits commented on a change in pull request #1208: URL: https://github.com/apache/nifi-minifi-cpp/pull/1208#discussion_r777609848 ## File path: extensions/standard-processors/tests/unit/PutUDPTests.cpp ## @@ -0,0 +1,112 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#include +#include +#include +#include +#include "SingleInputTestController.h" +#include "PutUDP.h" +#include "utils/net/DNS.h" +#include "utils/net/Socket.h" +#include "utils/expected.h" +#include "utils/StringUtils.h" + +namespace org::apache::nifi::minifi::processors { + +namespace { +struct DatagramListener { + DatagramListener(const char* const hostname, const char* const port) +:resolved_names_{utils::net::resolveHost(hostname, port, utils::net::IpProtocol::Udp).value()}, + open_socket_{utils::net::open_socket(resolved_names_.get()) +| utils::valueOrElse([=]() -> utils::net::OpenSocketResult { throw std::runtime_error{utils::StringUtils::join_pack("Failed to connect to ", hostname, " on port ", port)}; })} + { +const auto bind_result = bind(open_socket_.socket_.get(), open_socket_.selected_name->ai_addr, open_socket_.selected_name->ai_addrlen); +if (bind_result == utils::net::SocketError) { + throw std::runtime_error{utils::StringUtils::join_pack("bind: ", utils::net::get_last_socket_error().message())}; +} + } + + struct ReceiveResult { +std::string remote_address; +std::string message; + }; + + [[nodiscard]] ReceiveResult receive(const size_t max_message_size = 8192) const { +ReceiveResult result; +result.message.resize(max_message_size); +sockaddr_storage remote_address{}; +socklen_t addrlen = sizeof(remote_address); +const auto recv_result = recvfrom(open_socket_.socket_.get(), result.message.data(), result.message.size(), 0, std::launder(reinterpret_cast(&remote_address)), &addrlen); Review comment: makes sense, thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1208: MINIFICPP-1678 Create PutUDP processor
szaszm commented on a change in pull request #1208: URL: https://github.com/apache/nifi-minifi-cpp/pull/1208#discussion_r777609050 ## File path: libminifi/src/utils/net/Socket.cpp ## @@ -0,0 +1,72 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#include "utils/net/Socket.h" +#include "Exception.h" +#include +#include +#ifdef WIN32 +#ifndef WIN32_LEAN_AND_MEAN +#define WIN32_LEAN_AND_MEAN +#endif /* WIN32_LEAN_AND_MEAN */ +#include +#else +#include +#endif /* WIN32 */ + +namespace org::apache::nifi::minifi::utils::net { +std::error_code get_last_socket_error() { +#ifdef WIN32 + const auto error_code = WSAGetLastError(); +#else + const auto error_code = errno; +#endif /* WIN32 */ + return {error_code, std::system_category()}; +} + +nonstd::expected open_socket(const addrinfo* const getaddrinfo_result) { + for (const addrinfo* it = getaddrinfo_result; it; it = it->ai_next) { +const auto fd = socket(it->ai_family, it->ai_socktype, it->ai_protocol); +if (fd != utils::net::InvalidSocket) return OpenSocketResult{UniqueSocketHandle{fd}, gsl::make_not_null(it)}; + } + return nonstd::make_unexpected(get_last_socket_error()); +} + +std::string sockaddr_ntop(const sockaddr* const sa) { + std::string result; + if (sa->sa_family == AF_INET) { +sockaddr_in sa_in{}; +std::memcpy(reinterpret_cast(&sa_in), sa, sizeof(sockaddr_in)); Review comment: Good point, removing. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1208: MINIFICPP-1678 Create PutUDP processor
szaszm commented on a change in pull request #1208: URL: https://github.com/apache/nifi-minifi-cpp/pull/1208#discussion_r777608909 ## File path: libminifi/include/utils/net/Socket.h ## @@ -0,0 +1,104 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#pragma once +#include +#include +#ifdef WIN32 +#ifndef WIN32_LEAN_AND_MEAN +#define WIN32_LEAN_AND_MEAN +#endif /* WIN32_LEAN_AND_MEAN */ +#include +#else +#include +#include +#include +#include +#endif /* WIN32 */ +#include "nonstd/expected.hpp" +#include "utils/gsl.h" + +namespace org::apache::nifi::minifi::utils::net { +#ifdef WIN32 +using SocketDescriptor = SOCKET; +using ip4addr = in_addr; +inline constexpr SocketDescriptor InvalidSocket = INVALID_SOCKET; +constexpr int SocketError = SOCKET_ERROR; +#else +using SocketDescriptor = int; +using ip4addr = in_addr_t; +#undef INVALID_SOCKET +inline constexpr SocketDescriptor InvalidSocket = -1; +#undef SOCKET_ERROR +inline constexpr int SocketError = -1; +#endif /* WIN32 */ + +/** + * Return the last socket error code, based on errno on posix and WSAGetLastError() on windows. 
+ */ +std::error_code get_last_socket_error(); + +inline void close_socket(SocketDescriptor sockfd) { +#ifdef WIN32 + closesocket(sockfd); +#else + ::close(sockfd); +#endif +} + +class UniqueSocketHandle { + public: + explicit UniqueSocketHandle(SocketDescriptor owner_sockfd) noexcept + :owner_sockfd_(owner_sockfd) + {} + Review comment: It was an oversight, thanks for raising this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1208: MINIFICPP-1678 Create PutUDP processor
szaszm commented on a change in pull request #1208: URL: https://github.com/apache/nifi-minifi-cpp/pull/1208#discussion_r777608142 ## File path: libminifi/src/utils/net/DNS.cpp ## @@ -0,0 +1,93 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +#include "utils/net/DNS.h" +#include "Exception.h" +#include "utils/StringUtils.h" + +#ifdef WIN32 +#ifndef WIN32_LEAN_AND_MEAN +#define WIN32_LEAN_AND_MEAN +#endif +#include +#include +#include "utils/net/Socket.h" +#else +#include +#include +#endif /* WIN32 */ + +namespace org::apache::nifi::minifi::utils::net { + +namespace { + +#ifndef WIN32 +class addrinfo_category : public std::error_category { + public: + [[nodiscard]] const char* name() const noexcept override { return "addrinfo"; } + + [[nodiscard]] std::string message(int value) const override { +return gai_strerror(value); + } +}; + +const addrinfo_category& get_addrinfo_category() { + static addrinfo_category instance; + return instance; +} +#endif + +std::error_code get_last_getaddrinfo_err_code(int getaddrinfo_result) { +#ifdef WIN32 + (void)getaddrinfo_result; // against unused warnings on windows + return std::error_code{WSAGetLastError(), std::system_category()}; +#else + return std::error_code{getaddrinfo_result, get_addrinfo_category()}; +#endif /* WIN32 */ +} +} // namespace + +void addrinfo_deleter::operator()(addrinfo* const p) const noexcept { + freeaddrinfo(p); +} + +nonstd::expected, std::error_code> resolveHost(const char* const hostname, const char* const port, const IpProtocol protocol, const bool need_canonname) { + addrinfo hints{}; + memset(&hints, 0, sizeof hints); // make sure the struct is empty + hints.ai_family = AF_UNSPEC; + hints.ai_socktype = protocol == IpProtocol::Tcp ? SOCK_STREAM : SOCK_DGRAM; + hints.ai_flags = need_canonname ? AI_CANONNAME : 0; + if (!hostname) +hints.ai_flags |= AI_PASSIVE; + hints.ai_protocol = [protocol]() -> int { +switch (protocol) { + case IpProtocol::Tcp: return IPPROTO_TCP; + case IpProtocol::Udp: return IPPROTO_UDP; Review comment: I prefer to hide this detail from the interface as long as possible. I meant to use this enumerator as a way to hide the BSD/POSIX Sockets API macros from the API. 
If there is ever going to be another usage, then this goal will need to be reevaluated and probably the interface of both usages redesigned. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-9527) nifi-hive-nar fails to load because of the old snappy-java jar
Saurabh B created NIFI-9527: --- Summary: nifi-hive-nar fails to load because of the old snappy-java jar Key: NIFI-9527 URL: https://issues.apache.org/jira/browse/NIFI-9527 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.15.0, 1.15.2 Environment: Linux 3.10.0-1160.49.1.el7.x86_64 #1 SMP Tue Nov 9 16:09:48 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux Reporter: Saurabh B nifi-hive-nar has old version of snappy-java jar (snappy-java-1.0.5.jar) which fails to load. New version of snappy-java-1.1.8.4.jar works. {{2022-01-03 12:11:19,126 ERROR [main] org.apache.nifi.NiFi Failure to launch NiFi}} {{org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null}} {{ at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:239)}} {{ at org.xerial.snappy.Snappy.(Snappy.java:48)}} {{ at org.apache.nifi.processors.hive.PutHiveStreaming.(PutHiveStreaming.java:158)}} {{ at java.base/java.lang.Class.forName0(Native Method)}} {{ at java.base/java.lang.Class.forName(Class.java:398)}} {{ at org.apache.nifi.nar.StandardExtensionDiscoveringManager.getClass(StandardExtensionDiscoveringManager.java:330)}} {{ at org.apache.nifi.documentation.DocGenerator.documentConfigurableComponent(DocGenerator.java:100)}} {{ at org.apache.nifi.documentation.DocGenerator.generate(DocGenerator.java:65)}} {{ at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1139)}} {{ at org.apache.nifi.NiFi.(NiFi.java:170)}} {{ at org.apache.nifi.NiFi.(NiFi.java:82)}} {{ at org.apache.nifi.NiFi.main(NiFi.java:330)}} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1208: MINIFICPP-1678 Create PutUDP processor
szaszm commented on a change in pull request #1208: URL: https://github.com/apache/nifi-minifi-cpp/pull/1208#discussion_r777601496 ## File path: extensions/standard-processors/tests/unit/PutUDPTests.cpp ## @@ -0,0 +1,112 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#include +#include +#include +#include +#include "SingleInputTestController.h" +#include "PutUDP.h" +#include "utils/net/DNS.h" +#include "utils/net/Socket.h" +#include "utils/expected.h" +#include "utils/StringUtils.h" + +namespace org::apache::nifi::minifi::processors { + +namespace { +struct DatagramListener { + DatagramListener(const char* const hostname, const char* const port) +:resolved_names_{utils::net::resolveHost(hostname, port, utils::net::IpProtocol::Udp).value()}, + open_socket_{utils::net::open_socket(resolved_names_.get()) +| utils::valueOrElse([=]() -> utils::net::OpenSocketResult { throw std::runtime_error{utils::StringUtils::join_pack("Failed to connect to ", hostname, " on port ", port)}; })} + { +const auto bind_result = bind(open_socket_.socket_.get(), open_socket_.selected_name->ai_addr, open_socket_.selected_name->ai_addrlen); +if (bind_result == utils::net::SocketError) { + throw std::runtime_error{utils::StringUtils::join_pack("bind: ", utils::net::get_last_socket_error().message())}; +} + } + + struct ReceiveResult { +std::string remote_address; +std::string message; + }; + + [[nodiscard]] ReceiveResult receive(const size_t max_message_size = 8192) const { +ReceiveResult result; +result.message.resize(max_message_size); +sockaddr_storage remote_address{}; +socklen_t addrlen = sizeof(remote_address); +const auto recv_result = recvfrom(open_socket_.socket_.get(), result.message.data(), result.message.size(), 0, std::launder(reinterpret_cast(&remote_address)), &addrlen); Review comment: It shouldn't be needed, but since the code most likely violates the aliasing rules, I wanted to reduce the possibility of a future compiler messing it up with some optimization. 
The BSD/POSIX Sockets API was made in the 90s with no regards to aliasing rules in C and C++, and back then it didn't really matter, because compilers were much simpler and rarely if ever took advantage of optimization opportunities that arose as a consequence of having differently typed pointers in the same context, and those can not point to the same memory location. In the intended usage of the sockets API, one is supposed to initialize a memory location as `struct sockaddr_storage`, then fill it in with a function that works with `struct sockaddr_in` or `struct sockaddr_in6` (for IPv4 and IPv6 respectively) while passing it around as `struct sockaddr*`. These structures have compatible layouts, but this doesn't make the practice legal C or C++. You're correct that `std::launder` is not related, but I was hoping that the optimization barrier it creates overlaps with those that are required to make this aliasing violation work in that hypothetical future compiler that takes advantage of this UB. There was a similar usage of `std::launder` in libc++ "just to be safe": https://reviews.llvm.org/D47607. I can't give a good explanation of the proper usage of `std::launder`, because I'm not too familiar with the object model changes of C++17 that introduced it. I think it's used to obtain a valid pointer from an invalidated pointer to the same memory location, or something like this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1208: MINIFICPP-1678 Create PutUDP processor
szaszm commented on a change in pull request #1208: URL: https://github.com/apache/nifi-minifi-cpp/pull/1208#discussion_r777603062 ## File path: extensions/standard-processors/tests/unit/PutUDPTests.cpp ## @@ -0,0 +1,112 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#include +#include +#include +#include +#include "SingleInputTestController.h" +#include "PutUDP.h" +#include "utils/net/DNS.h" +#include "utils/net/Socket.h" +#include "utils/expected.h" +#include "utils/StringUtils.h" + +namespace org::apache::nifi::minifi::processors { + +namespace { +struct DatagramListener { + DatagramListener(const char* const hostname, const char* const port) +:resolved_names_{utils::net::resolveHost(hostname, port, utils::net::IpProtocol::Udp).value()}, + open_socket_{utils::net::open_socket(resolved_names_.get()) +| utils::valueOrElse([=]() -> utils::net::OpenSocketResult { throw std::runtime_error{utils::StringUtils::join_pack("Failed to connect to ", hostname, " on port ", port)}; })} + { +const auto bind_result = bind(open_socket_.socket_.get(), open_socket_.selected_name->ai_addr, open_socket_.selected_name->ai_addrlen); +if (bind_result == utils::net::SocketError) { + throw std::runtime_error{utils::StringUtils::join_pack("bind: ", utils::net::get_last_socket_error().message())}; +} + } + + struct ReceiveResult { +std::string remote_address; +std::string message; + }; + + [[nodiscard]] ReceiveResult receive(const size_t max_message_size = 8192) const { +ReceiveResult result; +result.message.resize(max_message_size); +sockaddr_storage remote_address{}; +socklen_t addrlen = sizeof(remote_address); +const auto recv_result = recvfrom(open_socket_.socket_.get(), result.message.data(), result.message.size(), 0, std::launder(reinterpret_cast(&remote_address)), &addrlen); +if (recv_result == utils::net::SocketError) { + throw std::runtime_error{utils::StringUtils::join_pack("recvfrom: ", utils::net::get_last_socket_error().message())}; +} +result.message.resize(gsl::narrow(recv_result)); +result.remote_address = utils::net::sockaddr_ntop(std::launder(reinterpret_cast(&remote_address))); +return result; + } + + std::unique_ptr resolved_names_; + utils::net::OpenSocketResult open_socket_; +}; +} // namespace + +// Testing the failure 
relationship is not required, because UDP in general comes without guarantees, flow files are always routed to success, unless there is +// some weird IO error with the content repo. +TEST_CASE("PutUDP", "[putudp]") { + const auto putudp = std::make_shared<PutUDP>("PutUDP"); + auto random_engine = std::mt19937{std::random_device{}()}; // NOLINT: "Missing space before { [whitespace/braces] [5]" + // most systems use ports 32768 - 65535 as ephemeral ports, so avoid binding to those + const auto port = std::uniform_int_distribution{1, 32768 - 1}(random_engine); + const auto port_str = std::to_string(port); + + test::SingleInputTestController controller{putudp}; + LogTestController::getInstance().setTrace(); + LogTestController::getInstance().setTrace(); + LogTestController::getInstance().setLevelByClassName(spdlog::level::trace, "org::apache::nifi::minifi::core::ProcessContextExpr"); + putudp->setProperty(PutUDP::Hostname, "${literal('localhost')}"); + putudp->setProperty(PutUDP::Port, utils::StringUtils::join_pack("${literal('", port_str, "')}")); + + DatagramListener listener{"localhost", port_str.c_str()}; + + { +const char* const message = "first message: hello"; +const auto result = controller.trigger(message); +const auto& success_flow_files = result.at(PutUDP::Success); +REQUIRE(success_flow_files.size() == 1); +REQUIRE(result.at(PutUDP::Failure).empty()); +REQUIRE(controller.plan->getContent(success_flow_files[0]) == message); +auto receive_result = listener.receive(); +REQUIRE(receive_result.message == message); +REQUIRE(!receive_result.remote_address.empty()); + } + + { Review comment: I want to reuse the listener so that the test only needs one port from the host system (+ one for the processor), not two.
[GitHub] [nifi] markap14 commented on a change in pull request #5593: NIFI-9475 Provide Framework-Level Retries for NiFi Relationships
markap14 commented on a change in pull request #5593: URL: https://github.com/apache/nifi/pull/5593#discussion_r777515747 ## File path: nifi-api/src/main/java/org/apache/nifi/processor/ProcessContext.java ## @@ -163,4 +163,8 @@ * @return the configured name of this processor */ String getName(); + +boolean isRetriedRelationship(Relationship relationship); + +int getRetryCounts(); Review comment: Again, should be `getRetryCount()` -- only a single count. Need to make sure we add JavaDocs. ## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/StandardProcessorNode.java ## @@ -1863,6 +1873,57 @@ public ScheduledState getDesiredState() { return desiredState; } +@Override +public int getRetryCounts() { +return retryCounts.get(); +} + +@Override +public synchronized void setRetryCounts(int retryCounts) { +this.retryCounts.set(retryCounts); +} + +@Override +public Set getRetriedRelationships() { +if (retriedRelationships.get() == null) { +return new HashSet<>(); +} +return retriedRelationships.get(); +} + +@Override +public synchronized void setRetriedRelationships(Set retriedRelationships) { Review comment: `synchronized` keyword is unnecessary here. 
## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/repository/StandardProcessSession.java ## @@ -356,6 +408,95 @@ private void checkpoint(final boolean copyCollections) { checkpoint.checkpoint(this, autoTerminatedEvents, copyCollections); } +private boolean isRetryNeeded(final ProcessorNode processorNode, final StandardRepositoryRecord record, final FlowFileRecord currentFlowFile, + final int retryCounts, final Map uuidsToRecords) { +if (currentFlowFile == null || processorNode == null || processorNode.getRetriedRelationships().isEmpty()) { +return false; +} + +if (processorNode.isRetriedRelationship(record.getTransferRelationship())) { +return retryCounts < processorNode.getRetryCounts(); +} + +if (forkEventBuilders.get(currentFlowFile) != null) { +for (String uuid : forkEventBuilders.get(currentFlowFile).getChildFlowFileIds()) { +if (processorNode.isRetriedRelationship(uuidsToRecords.get(uuid).getTransferRelationship())) { +return retryCounts < processorNode.getRetryCounts(); +} +} +} +return false; +} + +private FlowFileRecord updateFlowFileRecord(final StandardRepositoryRecord record, +final Map uuidsToRecords, +final int retryCounts, final FlowFileRecord flowFileRecord) { + +removeTemporaryClaim(record); +if (forkEventBuilders.get(flowFileRecord) != null) { +for (String uuid : forkEventBuilders.get(flowFileRecord).getChildFlowFileIds()) { +final StandardRepositoryRecord childRecord = uuidsToRecords.get(uuid); +removeTemporaryClaim(childRecord); +createdFlowFiles.remove(uuid); +records.remove(childRecord.getCurrent().getId()); +} +} + +final StandardFlowFileRecord.Builder builder = new StandardFlowFileRecord.Builder().fromFlowFile(flowFileRecord); +record.setTransferRelationship(null); +record.setDestination(record.getOriginalQueue()); + +builder.addAttribute("retryCounts", String.valueOf(retryCounts)); +record.getUpdatedAttributes().clear(); +final FlowFileRecord newFile = 
builder.build(); +record.setWorking(newFile, false); +return newFile; +} + +private void adjustProcessorStatistics(final StandardRepositoryRecord record, final Relationship relationship, + final Map uuidsToRecords) { +final int numDestinations = context.getConnections(relationship).size(); +final int multiplier = Math.max(1, numDestinations); +final ProvenanceEventBuilder eventBuilder = forkEventBuilders.get(record.getOriginal()); +final List childFlowFileIds = new ArrayList<>(); +int contentSize = 0; + +if (eventBuilder != null) { +childFlowFileIds.addAll(eventBuilder.getChildFlowFileIds()); +for (String uuid : childFlowFileIds) { +contentSize += uuidsToRecords.get(uuid).getCurrent().getSize(); +} +} + +flowFilesIn--; +contentSizeIn -= record.getOriginal().getSize(); +flowFilesOut -= multiplier * (childFlowFileIds.size() + 1); +contentSizeO
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1219: MINIFICPP-1691: PutSplunkHTTP and QuerySplunkIndexingStatus
szaszm commented on a change in pull request #1219: URL: https://github.com/apache/nifi-minifi-cpp/pull/1219#discussion_r777584530 ## File path: extensions/splunk/PutSplunkHTTP.cpp ## @@ -0,0 +1,179 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + + +#include "PutSplunkHTTP.h" + +#include +#include + +#include "SplunkAttributes.h" + +#include "core/Resource.h" +#include "utils/StringUtils.h" +#include "client/HTTPClient.h" +#include "utils/HTTPClient.h" +#include "utils/TimeUtil.h" + +#include "rapidjson/document.h" + + +namespace org::apache::nifi::minifi::extensions::splunk { + +const core::Property PutSplunkHTTP::Source(core::PropertyBuilder::createProperty("Source") +->withDescription("Basic field describing the source of the event. If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::SourceType(core::PropertyBuilder::createProperty("Source Type") +->withDescription("Basic field describing the source type of the event. 
If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::Host(core::PropertyBuilder::createProperty("Host") +->withDescription("Basic field describing the host of the event. If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::Index(core::PropertyBuilder::createProperty("Index") +->withDescription("Identifies the index where to send the event. If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::ContentType(core::PropertyBuilder::createProperty("Content Type") +->withDescription("The media type of the event sent to Splunk. If not set, \"mime.type\" flow file attribute will be used. " + "In case of neither of them is specified, this information will not be sent to the server.") +->supportsExpressionLanguage(true)->build()); + + +const core::Relationship PutSplunkHTTP::Success("success", "FlowFiles that are sent successfully to the destination are sent to this relationship."); +const core::Relationship PutSplunkHTTP::Failure("failure", "FlowFiles that failed to send to the destination are sent to this relationship."); + +void PutSplunkHTTP::initialize() { + setSupportedRelationships({Success, Failure}); + setSupportedProperties({Hostname, Port, Token, SplunkRequestChannel, Source, SourceType, Host, Index, ContentType}); +} + +void PutSplunkHTTP::onSchedule(const std::shared_ptr& context, const std::shared_ptr& sessionFactory) { + SplunkHECProcessor::onSchedule(context, sessionFactory); +} + + +namespace { +std::optional getContentType(core::ProcessContext& context, const core::FlowFile& flow_file) { + std::optional content_type = context.getProperty(PutSplunkHTTP::ContentType); + if (content_type.has_value()) +return content_type; + return 
flow_file.getAttribute("mime.key"); +} + + +std::string getEndpoint(core::ProcessContext& context, const gsl::not_null>& flow_file) { + std::stringstream endpoint; + endpoint << "/services/collector/raw"; + std::vector parameters; + std::string prop_value; + if (context.getProperty(PutSplunkHTTP::SourceType, prop_value, flow_file)) { +parameters.push_back("sourcetype=" + prop_value); + } + if (context.getProperty(PutSplunkHTTP::Source, prop_value, flow_file)) { +parameters.push_back("source=" + prop_value); + } + if (context.getProperty(PutSplunkHTTP::Host, prop_value, flow_file)) { +parameters.push_back("host=" + prop_value); + } + if (context.getProperty(PutSplunkHTTP::Index, prop_value, flow_file)) { +parameters.push_back("index=" + prop_value); + } + if (!parameters.empty()) { +endpoint << "?" << utils::StringUtils::join("&", parameters); + } + return endpoint.str(); +} + +bool addAttributesFromClientResponse(core::FlowFile& flow_file, utils::HTTPClient& client) { + rapidjson::Document response_json; + rapidjson::ParseResult parse_result = response_j
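The query-string assembly in `getEndpoint()` above can be sketched in standard C++ without the MiNiFi utilities. `buildEndpoint` below is a hypothetical stand-in that inlines what the `utils::StringUtils::join` call does with the parameter list:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical stand-in for the endpoint assembly in getEndpoint():
// prepend "?" before the first parameter and "&" before the rest,
// yielding base, base?a=1, or base?a=1&b=2.
inline std::string buildEndpoint(const std::string& base,
                                 const std::vector<std::string>& parameters) {
    std::string endpoint = base;
    for (std::size_t i = 0; i < parameters.size(); ++i) {
        endpoint += (i == 0 ? '?' : '&');
        endpoint += parameters[i];
    }
    return endpoint;
}
```

Note this sketch, like the code under review, does not URL-encode the parameter values; that may be worth considering for values coming from expression language.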
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1219: MINIFICPP-1691: PutSplunkHTTP and QuerySplunkIndexingStatus
szaszm commented on a change in pull request #1219: URL: https://github.com/apache/nifi-minifi-cpp/pull/1219#discussion_r777581938 ## File path: libminifi/include/utils/TimeUtil.h ## @@ -37,6 +37,24 @@ namespace minifi { namespace utils { namespace timeutils { +/** + * Converts the time point to the elapsed time since epoch + * @returns TimeUnit since epoch + */ +template +uint64_t getTimestamp(const TimePoint& time_point) { + return std::chrono::duration_cast(time_point.time_since_epoch()).count(); +} + +/** + * Converts the time since epoch into a time point + * @returns the time point matching the input timestamp + */ +template +std::chrono::time_point getTimePoint(uint64_t timestamp) { + return std::chrono::time_point() + TimeUnit(timestamp); +} Review comment: [I criticized these in your other PR](https://github.com/apache/nifi-minifi-cpp/pull/1225#discussion_r767688269), which would apply here, too. If you think that they are useful, feel free to reintroduce them there, but if they don't improve the readability, you may want to consider removing them. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
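For reference, a self-contained version of the two helpers quoted above. The template parameter lists were stripped by the mail formatting, so the signatures below are a plausible reconstruction, not necessarily identical to the PR:

```cpp
#include <chrono>
#include <cstdint>

// Plausible reconstruction of the helpers under review: convert a
// time_point to a count of TimeUnit ticks since the epoch, and back.
template <class TimeUnit, class Clock, class Duration>
uint64_t getTimestamp(const std::chrono::time_point<Clock, Duration>& time_point) {
  return std::chrono::duration_cast<TimeUnit>(time_point.time_since_epoch()).count();
}

template <class TimeUnit, class Clock>
std::chrono::time_point<Clock, TimeUnit> getTimePoint(uint64_t timestamp) {
  // epoch (default-constructed time_point) plus the given number of ticks
  return std::chrono::time_point<Clock, TimeUnit>{} + TimeUnit(timestamp);
}
```

The review's point stands: these are thin wrappers over `duration_cast`, so call sites may read just as well writing the conversion directly.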
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1224: MINIFICPP-1698 - Make archive read/write agent-wide available
szaszm commented on a change in pull request #1224: URL: https://github.com/apache/nifi-minifi-cpp/pull/1224#discussion_r777567108 ## File path: extensions/libarchive/WriteArchiveStream.cpp ## @@ -0,0 +1,151 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include "WriteArchiveStream.h" + +#include +#include + +#include "core/Resource.h" +#include "ReadArchiveStream.h" + +namespace org::apache::nifi::minifi::io { + +WriteArchiveStreamImpl::archive_ptr WriteArchiveStreamImpl::createWriteArchive() { + archive_ptr arch = archive_write_new(); + if (!arch) { +logger_->log_error("Failed to create write archive"); +return nullptr; + } + + int result; + + result = archive_write_set_format_ustar(arch.get()); Review comment: My preference would be separate scope for all of the result variables. Like an [SSA](https://en.wikipedia.org/wiki/Static_single_assignment_form) form, but done by hand to avoid multiple meanings of the same variable. I can also live with it as is. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
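The hand-rolled SSA idea from the review can be sketched as follows. `set_format`, `add_filter`, and `kArchiveOk` are hypothetical stand-ins for the libarchive calls and `ARCHIVE_OK`; only the scoping pattern is the point:

```cpp
#include <functional>

// Each `result` lives in its own block scope, so the name is assigned
// exactly once and never carries two meanings -- SSA form done by hand.
inline bool configureArchive(const std::function<int()>& set_format,
                             const std::function<int()>& add_filter) {
    constexpr int kArchiveOk = 0;  // stand-in for libarchive's ARCHIVE_OK
    {
        const int result = set_format();
        if (result != kArchiveOk) return false;
    }
    {
        const int result = add_filter();
        if (result != kArchiveOk) return false;
    }
    return true;
}
```

Each `result` can now be `const`, which is the main readability win over one mutable variable reused across checks.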
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1224: MINIFICPP-1698 - Make archive read/write agent-wide available
szaszm commented on a change in pull request #1224: URL: https://github.com/apache/nifi-minifi-cpp/pull/1224#discussion_r777567955 ## File path: extensions/libarchive/WriteArchiveStream.cpp ## @@ -0,0 +1,151 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#include "WriteArchiveStream.h" + +#include +#include + +#include "core/Resource.h" +#include "ReadArchiveStream.h" + +namespace org::apache::nifi::minifi::io { + +WriteArchiveStreamImpl::archive_ptr WriteArchiveStreamImpl::createWriteArchive() { + archive_ptr arch = archive_write_new(); + if (!arch) { +logger_->log_error("Failed to create write archive"); +return nullptr; + } + + int result; + + result = archive_write_set_format_ustar(arch.get()); + if (result != ARCHIVE_OK) { +logger_->log_error("Archive write set format ustar error %s", archive_error_string(arch.get())); +return nullptr; + } + if (compress_format_ == CompressionFormat::GZIP) { +result = archive_write_add_filter_gzip(arch.get()); +if (result != ARCHIVE_OK) { + logger_->log_error("Archive write add filter gzip error %s", archive_error_string(arch.get())); + return nullptr; +} +std::string option = "gzip:compression-level=" + std::to_string(compress_level_); +result = archive_write_set_options(arch.get(), option.c_str()); +if (result != ARCHIVE_OK) { + logger_->log_error("Archive write set options error %s", archive_error_string(arch.get())); + return nullptr; +} + } else if (compress_format_ == CompressionFormat::BZIP2) { +result = archive_write_add_filter_bzip2(arch.get()); +if (result != ARCHIVE_OK) { + logger_->log_error("Archive write add filter bzip2 error %s", archive_error_string(arch.get())); + return nullptr; +} + } else if (compress_format_ == CompressionFormat::LZMA) { +result = archive_write_add_filter_lzma(arch.get()); +if (result != ARCHIVE_OK) { + logger_->log_error("Archive write add filter lzma error %s", archive_error_string(arch.get())); + return nullptr; +} + } else if (compress_format_ == CompressionFormat::XZ_LZMA2) { +result = archive_write_add_filter_xz(arch.get()); +if (result != ARCHIVE_OK) { + logger_->log_error("Archive write add filter xz error %s", archive_error_string(arch.get())); + return nullptr; +} + } else { +logger_->log_error("Archive write unsupported 
compression format"); +return nullptr; + } + result = archive_write_set_bytes_per_block(arch.get(), 0); + if (result != ARCHIVE_OK) { +logger_->log_error("Archive write set bytes per block error %s", archive_error_string(arch.get())); +return nullptr; + } + result = archive_write_open(arch.get(), sink_.get(), nullptr, archive_write, nullptr); + if (result != ARCHIVE_OK) { +logger_->log_error("Archive write open error %s", archive_error_string(arch.get())); +return nullptr; + } + return arch; +} + +bool WriteArchiveStreamImpl::newEntry(const EntryInfo& info) { + if (!arch_) { +return false; + } + arch_entry_ = archive_entry_new(); + if (!arch_entry_) { +logger_->log_error("Failed to create archive entry"); +return false; + } + archive_entry_set_pathname(arch_entry_.get(), info.filename.c_str()); + archive_entry_set_size(arch_entry_.get(), info.size); + archive_entry_set_mode(arch_entry_.get(), S_IFREG | 0755); + + int result = archive_write_header(arch_.get(), arch_entry_.get()); + if (result != ARCHIVE_OK) { +logger_->log_error("Archive write header error %s", archive_error_string(arch_.get())); +return false; + } + return true; +} + +size_t WriteArchiveStreamImpl::write(const uint8_t* data, size_t len) { Review comment: It would be nice to specify a precondition here for not-null data. Not sure about requirements on len, but it would seem logical to require it to be greater than zero, or allow either both data and len to be zero, or neither. ```suggestion size_t WriteArchiveStreamImpl::write(const uint8_t* data, size_t len) { gsl_Expects(data); // maybe gsl_Expects(data && len > 0); or gsl_Expects(!data == (len == 0)); ``` ## File path: extensions/libarchive/ReadAr
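The three precondition options proposed in the suggestion above differ only in which `(data, len)` combinations they admit. A small sketch of the symmetric variant, `gsl_Expects(!data == (len == 0))`, written as a plain predicate so it can be tested in isolation:

```cpp
#include <cstddef>
#include <cstdint>

// Symmetric precondition from the review: either both data and len are
// "empty" (nullptr and 0) or both are set -- never one without the other.
inline bool writePreconditionHolds(const std::uint8_t* data, std::size_t len) {
    return !data == (len == 0);
}
```

The strictest variant would instead be `data && len > 0`, rejecting empty writes entirely; which one fits depends on whether zero-length writes are meaningful for the stream.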
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1219: MINIFICPP-1691: PutSplunkHTTP and QuerySplunkIndexingStatus
lordgamez commented on a change in pull request #1219: URL: https://github.com/apache/nifi-minifi-cpp/pull/1219#discussion_r777534222 ## File path: docker/test/integration/MiNiFi_integration_test_driver.py ## @@ -60,6 +60,12 @@ def start_kafka_broker(self): self.cluster.deploy('kafka-broker') assert self.wait_for_container_startup_to_finish('kafka-broker') +def start_splunk(self): +self.cluster.acquire_container('splunk', 'splunk') +self.cluster.deploy('splunk') +assert self.wait_for_container_startup_to_finish('splunk') +assert self.cluster.enable_hec_indexer('splunk', 'splunk_hec_token') Review comment: From the test's point of view is it necessary to start splunk before the minifi process or is it only done separately for us to be able to enable the hec indexer? In the latter case it could be possible to have the hec indexer enabling be set as part of the entrypoint of the container (like a single command starting splunk then the hec indexer, or creating a starter script) then it wouldn't be necessary to have this container started separately from all the other cluster containers. ## File path: extensions/splunk/PutSplunkHTTP.cpp ## @@ -0,0 +1,179 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + + +#include "PutSplunkHTTP.h" + +#include +#include + +#include "SplunkAttributes.h" + +#include "core/Resource.h" +#include "utils/StringUtils.h" +#include "client/HTTPClient.h" +#include "utils/HTTPClient.h" +#include "utils/TimeUtil.h" + +#include "rapidjson/document.h" + + +namespace org::apache::nifi::minifi::extensions::splunk { + +const core::Property PutSplunkHTTP::Source(core::PropertyBuilder::createProperty("Source") +->withDescription("Basic field describing the source of the event. If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::SourceType(core::PropertyBuilder::createProperty("Source Type") +->withDescription("Basic field describing the source type of the event. If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::Host(core::PropertyBuilder::createProperty("Host") +->withDescription("Basic field describing the host of the event. If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::Index(core::PropertyBuilder::createProperty("Index") +->withDescription("Identifies the index where to send the event. If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::ContentType(core::PropertyBuilder::createProperty("Content Type") +->withDescription("The media type of the event sent to Splunk. If not set, \"mime.type\" flow file attribute will be used. 
" + "In case of neither of them is specified, this information will not be sent to the server.") +->supportsExpressionLanguage(true)->build()); + + +const core::Relationship PutSplunkHTTP::Success("success", "FlowFiles that are sent successfully to the destination are sent to this relationship."); +const core::Relationship PutSplunkHTTP::Failure("failure", "FlowFiles that failed to send to the destination are sent to this relationship."); + +void PutSplunkHTTP::initialize() { + setSupportedRelationships({Success, Failure}); + setSupportedProperties({Hostname, Port, Token, SplunkRequestChannel, Source, SourceType, Host, Index, ContentType}); +} + +void PutSplunkHTTP::onSchedule(const std::shared_ptr& context, const std::shared_ptr& sessionFactory) { + SplunkHECProcessor::onSchedule(context, sessionFactory); +} + + +namespace { +std::optional getContentType(core::ProcessContext& context, const core::FlowFile& flow_file) { + std::optional content_type = context.getProperty(PutSplunkHTTP::ContentType); + if (content_type.has_value()) +return content_type; + return flow_file.getAttribute("mime.key"); +} + + +std::stri
[jira] [Created] (NIFI-9526) Allow for array of records to be returned by LookupRecord
Pierre Villard created NIFI-9526: Summary: Allow for array of records to be returned by LookupRecord Key: NIFI-9526 URL: https://issues.apache.org/jira/browse/NIFI-9526 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Pierre Villard In some cases, some lookup services may be able to return multiple records for one set of lookup values. It's currently not possible to return all values; we return only one (Database lookup, Kudu lookup, Mongo lookup, etc.). We should provide the option to return an array of records as the result of the lookup. It may be necessary to expose a specific property on the controller service to control whether multiple records may be returned. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [nifi-minifi-cpp] lordgamez opened a new pull request #1230: MINIFICPP-1705 Upgrade and fix compilation if OpenCV
lordgamez opened a new pull request #1230: URL: https://github.com/apache/nifi-minifi-cpp/pull/1230 Loading the OpenCV extension failed because the zlib library was not linked to the opencv-core library. OpenCV has been upgraded to the latest version, 4.5.5, and our bundled zlib is now linked to the opencv-core library. https://issues.apache.org/jira/browse/MINIFICPP-1705 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically main)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible.
[jira] [Updated] (MINIFICPP-1706) Rework script engine management in ExecuteScript processor
[ https://issues.apache.org/jira/browse/MINIFICPP-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi updated MINIFICPP-1706: - Description: In [MINIFICPP-1222|https://issues.apache.org/jira/browse/MINIFICPP-1222] an issue was raised that the script engines were not handled correctly by allowing more engines in the queue than the maximum number of concurrent tasks defined in the ExecutePythonProcessor processor. This issue is also present in the ExecuteScript processor. In the ExecutePythonProcessor processor the queue was eventually removed in https://github.com/apache/nifi-minifi-cpp/pull/1227 due to Python's GIL not allowing real concurrency. In the ExecuteScript processor we should have a solution that allows concurrency with Lua engines by limiting the number of engines to the maximum concurrent tasks limit, but having a single engine for Python script executions. was: In [MINIFICPP-1222|https://issues.apache.org/jira/browse/MINIFICPP-1222] an issue was raised that the script engines were not handled correctly by allowing more engines in the queue than the maximum number of concurrent operations in the ExecutePythonProcessor processor. This issue is also present in the ExecuteScript processor. In the ExecutePythonProcessor processor the queue was eventually removed in https://github.com/apache/nifi-minifi-cpp/pull/1227 due to Python's GIL not allowing real concurrency. In the ExecuteScript processor we should have a solution that allows concurrency with Lua engines by limiting the number of engines to the maximum concurrent operations limit, but having a single engine for Python script executions.
> Rework script engine management in ExecuteScript processor > -- > > Key: MINIFICPP-1706 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1706 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement > Reporter: Gábor Gyimesi > Priority: Minor > > In [MINIFICPP-1222|https://issues.apache.org/jira/browse/MINIFICPP-1222] an > issue was raised that the script engines were not handled correctly by > allowing more engines in the queue than the maximum number of concurrent > tasks defined in the ExecutePythonProcessor processor. This issue is also > present in the ExecuteScript processor. > In the ExecutePythonProcessor processor the queue was eventually removed in > https://github.com/apache/nifi-minifi-cpp/pull/1227 due to Python's GIL not > allowing real concurrency. In the ExecuteScript processor we should have a > solution that allows concurrency with Lua engines limiting the number of > engines to the maximum concurrent tasks limit, but having a single engine for > Python script executions.
[jira] [Updated] (NIFI-9525) RPM build does not produce working Nifi
[ https://issues.apache.org/jira/browse/NIFI-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gregory M. Foreman updated NIFI-9525: - Description: Maven RPM build fails to produce an operational Nifi installation.
{code:bash}
$ mvn -version
Apache Maven 3.8.4 (9b656c72d54e5bacbed989b64718c159fe39b537)
Maven home: /opt/maven
Java version: 1.8.0_312, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1160.49.1.el7.x86_64", arch: "amd64", family: "unix"

$ mvn clean install -Prpm -DskipTests
$ yum localinstall nifi-assembly/target/rpm/nifi-bin/RPMS/noarch/nifi-1.15.1-1.el7.noarch.rpm

$ /opt/nifi/nifi-1.15.1/bin/nifi.sh start
nifi.sh: JAVA_HOME not set; results may vary
Java home:
NiFi home: /opt/nifi/nifi-1.15.1
Bootstrap Config File: /opt/nifi/nifi-1.15.1/conf/bootstrap.conf

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/nifi/security/util/TlsConfiguration
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:473)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	at org.apache.nifi.bootstrap.util.SecureNiFiConfigUtil.configureSecureNiFiProperties(SecureNiFiConfigUtil.java:124)
	at org.apache.nifi.bootstrap.RunNiFi.start(RunNiFi.java:1247)
	at org.apache.nifi.bootstrap.RunNiFi.main(RunNiFi.java:289)
Caused by: java.lang.ClassNotFoundException: org.apache.nifi.security.util.TlsConfiguration
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 15 more
{code}
[GitHub] [nifi-minifi-cpp] lordgamez commented on pull request #1227: MINIFICPP-1222 Remove engine queue for ExecutePythonProcessor
lordgamez commented on pull request #1227: URL: https://github.com/apache/nifi-minifi-cpp/pull/1227#issuecomment-1004130214 > Looks good, but [ExecuteScript](https://github.com/apache/nifi-minifi-cpp/blob/main/extensions/script/ExecuteScript.cpp#L93) also uses engine queues (for both python and lua) so this problem should be there for ExecuteScript while in python mode. > > On the other hand lua doesnt have a global interpreter lock and ExecuteScript doesnt really allow stateful scripts, so the engine queue should theoretically work there (everything is contained in the sol::state member of LuaScriptEngine.h), but then I guess the original problem detailed in https://issues.apache.org/jira/browse/MINIFICPP-1222 is still present in ExecuteScript Lua mode. That's a good point. I think that should also be fixed, with a solution that allows concurrent Lua executions up to the configured maximum-concurrent-tasks limit, while handling Python executions differently by using only a single script engine. I created a Jira ticket for that rework: https://issues.apache.org/jira/browse/MINIFICPP-1706 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (MINIFICPP-1706) Rework script engine management in ExecuteScript processor
Gábor Gyimesi created MINIFICPP-1706: Summary: Rework script engine management in ExecuteScript processor Key: MINIFICPP-1706 URL: https://issues.apache.org/jira/browse/MINIFICPP-1706 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Gábor Gyimesi In [MINIFICPP-1222|https://issues.apache.org/jira/browse/MINIFICPP-1222] an issue was raised that script engines were not handled correctly: the ExecutePythonProcessor allowed more engines in the queue than the maximum number of concurrent operations. This issue is also present in the ExecuteScript processor. In ExecutePythonProcessor the queue was eventually removed in https://github.com/apache/nifi-minifi-cpp/pull/1227 because Python's GIL does not allow real concurrency. In ExecuteScript we should have a solution that allows concurrency with Lua engines, limiting the number of engines to the maximum concurrent operations limit, while keeping a single engine for Python script executions. -- This message was sent by Atlassian Jira (v8.20.1#820001)
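The rework described above — a bounded pool of Lua engines capped at the maximum concurrent tasks, but a single shared Python engine because of the GIL — can be sketched language-agnostically. Below is a hedged Python sketch with illustrative names (in the real MiNiFi C++ code an engine would wrap a `sol::state` or the CPython interpreter, not a plain callable):

```python
import queue
import threading

class EnginePool:
    """Bounded pool of script engines, capped at the max concurrent tasks.

    The factory is called once per slot up front; run() blocks when all
    engines are checked out, so at most max_engines scripts run at once.
    """
    def __init__(self, factory, max_engines):
        self._engines = queue.Queue(maxsize=max_engines)
        for _ in range(max_engines):
            self._engines.put(factory())

    def run(self, script):
        engine = self._engines.get()   # blocks while every engine is busy
        try:
            return engine(script)
        finally:
            self._engines.put(engine)

class SingleEngine:
    """One engine guarded by a lock -- with a GIL, extra engines buy nothing."""
    def __init__(self, factory):
        self._engine = factory()
        self._lock = threading.Lock()

    def run(self, script):
        with self._lock:
            return self._engine(script)
```

Usage would pair `EnginePool` with the Lua factory and `SingleEngine` with the Python factory, which is the split the ticket proposes.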
[jira] [Created] (NIFI-9525) RPM build does not produce working Nifi
Gregory M. Foreman created NIFI-9525: Summary: RPM build does not produce working Nifi Key: NIFI-9525 URL: https://issues.apache.org/jira/browse/NIFI-9525 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.15.1 Environment: Centos 7 Reporter: Gregory M. Foreman Maven RPM build fails to produce an operational Nifi installation.
{code:bash}
$ mvn -version
Apache Maven 3.8.4 (9b656c72d54e5bacbed989b64718c159fe39b537)
Maven home: /opt/maven
Java version: 1.8.0_312, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1160.49.1.el7.x86_64", arch: "amd64", family: "unix"

$ mvn clean install -Prpm -DskipTests
$ yum localinstall nifi-assembly/target/rpm/nifi-bin/RPMS/noarch/nifi-1.15.1-1.el7.noarch.rpm

$ /opt/nifi/nifi-1.15.1/bin/nifi.sh start
nifi.sh: JAVA_HOME not set; results may vary
Java home:
NiFi home: /opt/nifi/nifi-1.15.1
Bootstrap Config File: /opt/nifi/nifi-1.15.1/conf/bootstrap.conf

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/nifi/security/util/TlsConfiguration
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:473)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	at org.apache.nifi.bootstrap.util.SecureNiFiConfigUtil.configureSecureNiFiProperties(SecureNiFiConfigUtil.java:124)
	at org.apache.nifi.bootstrap.RunNiFi.start(RunNiFi.java:1247)
	at org.apache.nifi.bootstrap.RunNiFi.main(RunNiFi.java:289)
Caused by: java.lang.ClassNotFoundException: org.apache.nifi.security.util.TlsConfiguration
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 15 more
{code}
-- This message was sent by Atlassian Jira (v8.20.1#820001)
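The NoClassDefFoundError above means the jar providing org.apache.nifi.security.util.TlsConfiguration (nifi-security-utils, which the bootstrap loader needs on its classpath) is missing from the RPM's installed layout. A quick way to check which jar, if any, contains a given class is a scan like the following hedged Python sketch; the path in the comment is illustrative, not the actual RPM layout:

```python
import zipfile
from pathlib import Path

def find_class_in_jars(lib_dir, class_name):
    """Return the jars under lib_dir that contain the given class.

    class_name uses dotted notation, e.g.
    'org.apache.nifi.security.util.TlsConfiguration'.
    """
    entry = class_name.replace(".", "/") + ".class"
    hits = []
    for jar in Path(lib_dir).rglob("*.jar"):
        with zipfile.ZipFile(jar) as zf:
            if entry in zf.namelist():
                hits.append(jar)
    return hits

# Illustrative usage (directory is hypothetical):
# find_class_in_jars("/opt/nifi/nifi-1.15.1/lib/bootstrap",
#                    "org.apache.nifi.security.util.TlsConfiguration")
```

An empty result over the install tree would confirm the RPM packaging dropped the jar rather than a classpath misconfiguration.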
[jira] [Created] (NIFI-9524) Build is failing because missing commons-logging excludes
Zoltán Kornél Török created NIFI-9524: - Summary: Build is failing because missing commons-logging excludes Key: NIFI-9524 URL: https://issues.apache.org/jira/browse/NIFI-9524 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.16.0, 1.15.2 Reporter: Zoltán Kornél Török If you build with the following profiles, the build fails because the banned commons-logging dependency is included: {code:bash} mvn install -DskipTests -Pinclude-ranger,include-atlas,include-hive3,include-rules,include-sql-reporting,include-hadoop-aws,include-hadoop-azure,include-hadoop-cloud-storage,include-hadoop-gcp,include-graph,include-grpc,include-accumulo,include-hadoop-ozone,include-asn1,include-aws{code} It needs to be excluded in some other places as well. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (NIFI-9524) Build is failing because missing commons-logging excludes
[ https://issues.apache.org/jira/browse/NIFI-9524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltán Kornél Török reassigned NIFI-9524: - Assignee: Zoltán Kornél Török > Build is failing because missing commons-logging excludes > - > > Key: NIFI-9524 > URL: https://issues.apache.org/jira/browse/NIFI-9524 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.16.0, 1.15.2 >Reporter: Zoltán Kornél Török >Assignee: Zoltán Kornél Török >Priority: Major > > If you build with the following profiles, the build fails because the > banned commons-logging dependency is included: > {code:bash} > mvn install -DskipTests > -Pinclude-ranger,include-atlas,include-hive3,include-rules,include-sql-reporting,include-hadoop-aws,include-hadoop-azure,include-hadoop-cloud-storage,include-hadoop-gcp,include-graph,include-grpc,include-accumulo,include-hadoop-ozone,include-asn1,include-aws{code} > It needs to be excluded in some other places as well. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [nifi-minifi-cpp] martinzink commented on pull request #1227: MINIFICPP-1222 Remove engine queue for ExecutePythonProcessor
martinzink commented on pull request #1227: URL: https://github.com/apache/nifi-minifi-cpp/pull/1227#issuecomment-1004107963 Looks good, but [ExecuteScript](https://github.com/apache/nifi-minifi-cpp/blob/main/extensions/script/ExecuteScript.cpp#L93) also uses engine queues (for both Python and Lua), so this problem should also be present for ExecuteScript in Python mode. On the other hand, Lua doesn't have a global interpreter lock and ExecuteScript doesn't really allow stateful scripts, so the engine queue should theoretically work there (everything is contained in the sol::state member of LuaScriptEngine.h), but then I guess the original problem detailed in https://issues.apache.org/jira/browse/MINIFICPP-1222 is still present in ExecuteScript Lua mode. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-9506) Nifi reconnects with websocket server each second
[ https://issues.apache.org/jira/browse/NIFI-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lehel Boér updated NIFI-9506: - Component/s: Extensions (was: Core Framework) > Nifi reconnects with websocket server each second > - > > Key: NIFI-9506 > URL: https://issues.apache.org/jira/browse/NIFI-9506 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.15.1 > Environment: official nifi container > (https://github.com/apache/nifi/blob/main/nifi-docker/dockerhub/Dockerfile) > running with podman 2.2.1 or 3.0.1, but based on a openjdk-11-jre base layer. >Reporter: Fabian Reiber >Priority: Major > Attachments: nifi_websocket.png > > > I have a Python application which offers a websocket to Apache NiFi. NiFi > uses ConnectWebSocket to connect to the server as a client. After an > upgrade from 1.13.2 to 1.15.1 it does not work anymore: NiFi > 1.15.1 reconnects to the websocket application all the time, whereas with version > 1.13.2 NiFi connects once to the websocket and everything is fine. So I > guess my setup with podman, which runs a NiFi, Redis and Python > container in one pod, is not related to the bug, because the same setup > with the same flow works with the NiFi version I was using before. 
> To simplify the setup and to reproduce this issue, I used this python > application with [websockets|https://github.com/aaugustin/websockets] in a > container: > {code:python} > import asyncio > import logging > from websockets import serve > ws_logger = logging.getLogger('websockets') > ws_logger.setLevel(logging.DEBUG) > ws_logger.addHandler(logging.StreamHandler()) > async def echo(websocket, path): > async for message in websocket: > await websocket.send(message) > async def main(): > async with serve(echo, 'localhost', 8761): > await asyncio.Future() # run forever > asyncio.run(main()) > {code} > > With nifi 1.14.0 the server logs : > {code:java} > server listening on [::1]:8761 > server listening on 127.0.0.1:8761 > = connection is CONNECTING > < GET /foobar HTTP/1.1 > < Accept-Encoding: gzip > < User-Agent: Jetty/9.4.42.v20210604 > < Upgrade: websocket > < Connection: Upgrade > < Sec-WebSocket-Key: izuSbowZLyZvfon4HgAzRQ== > < Sec-WebSocket-Version: 13 > < Pragma: no-cache > < Cache-Control: no-cache > < Host: 127.0.0.1:8761 > > HTTP/1.1 101 Switching Protocols > > Upgrade: websocket > > Connection: Upgrade > > Sec-WebSocket-Accept: Npa81PCNknQPE65lvzGnHYCzMoo= > > Date: Mon, 20 Dec 2021 12:56:00 GMT > > Server: Python/3.7 websockets/10.1 > connection open > = connection is OPEN > % sending keepalive ping > > PING a3 38 b7 85 [binary, 4 bytes] > < PONG a3 38 b7 85 [binary, 4 bytes] > % received keepalive pong > % sending keepalive ping > > PING 4a 5e cd 02 [binary, 4 bytes] > < PONG 4a 5e cd 02 [binary, 4 bytes] > % received keepalive pong > % sending keepalive ping > > PING eb 90 17 a2 [binary, 4 bytes] > < PONG eb 90 17 a2 [binary, 4 bytes] > % received keepalive pong > % sending keepalive ping > > PING 97 b3 1d 6f [binary, 4 bytes] > < PONG 97 b3 1d 6f [binary, 4 bytes] > % received keepalive pong > % sending keepalive ping > > PING 29 86 3c dc [binary, 4 bytes] > < PONG 29 86 3c dc [binary, 4 bytes] > % received keepalive pong > {code} > While the 
client logs: > {code:java} > INFO [Timer-Driven Process Thread-5] o.a.n.w.jetty.JettyWebSocketClient > JettyWebSocketClient[id=a7bd40ce-3fab-3c7e-c391-a66122576fc7] Connecting to : > ws://127.0.0.1:8761/foobar > INFO [Timer-Driven Process Thread-5] o.a.n.w.jetty.JettyWebSocketClient > JettyWebSocketClient[id=a7bd40ce-3fab-3c7e-c391-a66122576fc7] Connected, > session=WebSocketSession[websocket=JettyListenerEventDriver[org.apache.nifi.websocket.jetty.RoutingWebSocketListener],behavior=CLIENT,connection=WebSocketClientConnection@aad940f::SocketChannelEndPoint@4d2b432c{l=/127.0.0.1:40962,r=/127.0.0.1:8761,OPEN,fill=FI,flush=-,to=2/30}{io=1/1,kio=1,kro=1}->WebSocketClientConnection@aad940f[s=ConnectionState@26d0eddf[OPENED],f=Flusher@6b69d9be[IDLE][queueSize=0,aggregateSize=-1,terminated=null],g=Generator[CLIENT,validating],p=Parser@453aa6fb[ExtensionStack,s=START,c=0,len=0,f=null]],remote=WebSocketRemoteEndpoint@529faf53[batching=true],incoming=JettyListenerEventDriver[org.apache.nifi.websocket.jetty.RoutingWebSocketListener],outgoing=ExtensionStack[queueSize=0,extensions=[],incoming=org.eclipse.jetty.websocket.common.WebSocketSession,outgoing=org.eclipse.jetty.websocket.client.io.WebSocketClientConnection]] > {code} > With nifi 1.15.1 the server logs: > {code:java} > server listening on [::1]:8761 > server listening on 127.0.0.1:8761 > = connection is CONNECTING > < GET /foorbar HTTP/1.1 > < Accept-Encoding: gzip > < User-Agent
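Given server logs like the ones above, the reconnect loop can be quantified by counting connect events instead of eyeballing the output. A hypothetical helper, assuming the debug log lines emitted by the websockets library as shown in this report:

```python
def count_connections(log_text):
    """Count how many times the server saw a client finish connecting.

    Relies on the bare 'connection open' line that the websockets
    library's debug logging emits, as in the logs above. A healthy
    NiFi client produces one such line; the 1.15.1 behavior described
    here produces roughly one per second.
    """
    return sum(1 for line in log_text.splitlines()
               if line.strip() == "connection open")
```

Running this over a minute of server output from each NiFi version would turn the report's "reconnects each second" claim into a concrete count.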
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1219: MINIFICPP-1691: PutSplunkHTTP and QuerySplunkIndexingStatus
szaszm commented on a change in pull request #1219: URL: https://github.com/apache/nifi-minifi-cpp/pull/1219#discussion_r777471185
## File path: docker/test/integration/minifi/core/SplunkContainer.py ##
@@ -0,0 +1,26 @@
+import logging
+from .Container import Container
+
+
+class SplunkContainer(Container):
+    def __init__(self, name, vols, network, image_store):
+        super().__init__(name, 'splunk', vols, network, image_store)
+
+    def get_startup_finished_log_entry(self):
+        return "Ansible playbook complete, will begin streaming splunkd_stderr.log"
+
+    def deploy(self):
+        if not self.set_deployed():
+            return
+
+        logging.info('Creating and running Splunk docker container...')
+        self.client.containers.run(
+            self.image_store.get_image(self.get_engine()),
+            detach=True,
+            name=self.name,
+            network=self.network.name,
+            environment=[
+                "SPLUNK_START_ARGS=--accept-license",
Review comment: This is non-free software, but I think it's fine as long as we are not shipping any of it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
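The `get_startup_finished_log_entry` hook above gates the integration test on a readiness line appearing in the container's log. The generic wait loop behind such a hook can be sketched as follows — a hedged sketch, where `read_logs` stands in for whatever returns the container's current log text (e.g. a wrapper around docker's `container.logs()`), and the injectable clock/sleep exist only to make the sketch testable:

```python
import time

def wait_for_log_entry(read_logs, entry, timeout=60.0, poll=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll a log source until `entry` appears or `timeout` seconds elapse.

    Returns True as soon as the readiness line is seen, False on timeout.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if entry in read_logs():
            return True
        sleep(poll)
    return False
```

For the Splunk container above, `entry` would be the Ansible-playbook-complete line returned by `get_startup_finished_log_entry`.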
[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1219: MINIFICPP-1691: PutSplunkHTTP and QuerySplunkIndexingStatus
martinzink commented on a change in pull request #1219: URL: https://github.com/apache/nifi-minifi-cpp/pull/1219#discussion_r777462139 ## File path: extensions/splunk/QuerySplunkIndexingStatus.cpp ## @@ -0,0 +1,194 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + + +#include "QuerySplunkIndexingStatus.h" + +#include +#include + +#include "SplunkAttributes.h" + +#include "core/Resource.h" +#include "client/HTTPClient.h" +#include "utils/HTTPClient.h" +#include "utils/TimeUtil.h" + +#include "rapidjson/document.h" +#include "rapidjson/stringbuffer.h" +#include "rapidjson/writer.h" + +namespace org::apache::nifi::minifi::extensions::splunk { + +const core::Property QuerySplunkIndexingStatus::MaximumWaitingTime(core::PropertyBuilder::createProperty("Maximum Waiting Time") +->withDescription("The maximum time the processor tries to acquire acknowledgement confirmation for an index, from the point of registration. 
" + "After the given amount of time, the processor considers the index as not acknowledged and transfers the FlowFile to the \"unacknowledged\" relationship.") +->withDefaultValue("1 hour")->isRequired(true)->build()); + +const core::Property QuerySplunkIndexingStatus::MaxQuerySize(core::PropertyBuilder::createProperty("Maximum Query Size") +->withDescription("The maximum number of acknowledgement identifiers the outgoing query contains in one batch. " + "It is recommended not to set it too low in order to reduce network communication.") +->withDefaultValue("1000")->isRequired(true)->build()); + +const core::Relationship QuerySplunkIndexingStatus::Acknowledged("acknowledged", +"A FlowFile is transferred to this relationship when the acknowledgement was successful."); + +const core::Relationship QuerySplunkIndexingStatus::Unacknowledged("unacknowledged", +"A FlowFile is transferred to this relationship when the acknowledgement was not successful. " +"This can happen when the acknowledgement did not happened within the time period set for Maximum Waiting Time. " +"FlowFiles with acknowledgement id unknown for the Splunk server will be transferred to this relationship after the Maximum Waiting Time is reached."); + +const core::Relationship QuerySplunkIndexingStatus::Undetermined("undetermined", +"A FlowFile is transferred to this relationship when the acknowledgement state is not determined. " +"FlowFiles transferred to this relationship might be penalized. 
" +"This happens when Splunk returns with HTTP 200 but with false response for the acknowledgement id in the flow file attribute."); + +const core::Relationship QuerySplunkIndexingStatus::Failure("failure", +"A FlowFile is transferred to this relationship when the acknowledgement was not successful due to errors during the communication, " +"or if the flowfile was missing the acknowledgement id"); + +void QuerySplunkIndexingStatus::initialize() { + SplunkHECProcessor::initialize(); + setSupportedRelationships({Acknowledged, Unacknowledged, Undetermined, Failure}); + updateSupportedProperties({MaximumWaitingTime, MaxQuerySize}); +} Review comment: Changed it in https://github.com/apache/nifi-minifi-cpp/pull/1219/commits/25e4878262c6ec80237339cd7388e7caab830ea0#diff-a2db2ff59dd1ebf5f1e3e053781c55708df909e3009ee21015da3fe04218def4R66 https://github.com/apache/nifi-minifi-cpp/pull/1219/commits/25e4878262c6ec80237339cd7388e7caab830ea0#diff-2633ef573b024e894869a6a974a55671c3468db3eff99e4cdc646a081a700efdR64 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
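The batching behind the "Maximum Query Size" property above — each outgoing status query carries at most that many acknowledgement ids — can be sketched on its own. This is a hedged Python sketch, not the MiNiFi C++ implementation; the `{"acks": [...]}` body shape follows Splunk's HEC indexer-acknowledgment API, which the processor queries:

```python
def batch_ack_ids(ack_ids, max_query_size=1000):
    """Split acknowledgement ids into request-sized query bodies.

    Mirrors "Maximum Query Size": each yielded body holds at most
    max_query_size ids, so larger values mean fewer round trips to
    Splunk, which is why the property description advises against
    setting it too low.
    """
    for i in range(0, len(ack_ids), max_query_size):
        yield {"acks": ack_ids[i:i + max_query_size]}
```

Each yielded dict would then be serialized to JSON and POSTed to the acknowledgement endpoint.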
[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1219: MINIFICPP-1691: PutSplunkHTTP and QuerySplunkIndexingStatus
martinzink commented on a change in pull request #1219: URL: https://github.com/apache/nifi-minifi-cpp/pull/1219#discussion_r777461218 ## File path: extensions/splunk/PutSplunkHTTP.cpp ## @@ -0,0 +1,180 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + + +#include "PutSplunkHTTP.h" + +#include +#include + +#include "SplunkAttributes.h" + +#include "core/Resource.h" +#include "utils/StringUtils.h" +#include "client/HTTPClient.h" +#include "utils/HTTPClient.h" +#include "utils/TimeUtil.h" + +#include "rapidjson/document.h" + + +namespace org::apache::nifi::minifi::extensions::splunk { + +const core::Property PutSplunkHTTP::Source(core::PropertyBuilder::createProperty("Source") +->withDescription("Basic field describing the source of the event. If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::SourceType(core::PropertyBuilder::createProperty("Source Type") +->withDescription("Basic field describing the source type of the event. 
If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::Host(core::PropertyBuilder::createProperty("Host") +->withDescription("Basic field describing the host of the event. If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::Index(core::PropertyBuilder::createProperty("Index") +->withDescription("Identifies the index where to send the event. If unspecified, the event will use the default defined in splunk.") +->supportsExpressionLanguage(true)->build()); + +const core::Property PutSplunkHTTP::ContentType(core::PropertyBuilder::createProperty("Content Type") +->withDescription("The media type of the event sent to Splunk. If not set, \"mime.type\" flow file attribute will be used. " + "In case of neither of them is specified, this information will not be sent to the server.") +->supportsExpressionLanguage(true)->build()); + + +const core::Relationship PutSplunkHTTP::Success("success", "FlowFiles that are sent successfully to the destination are sent to this relationship."); +const core::Relationship PutSplunkHTTP::Failure("failure", "FlowFiles that failed to send to the destination are sent to this relationship."); + +void PutSplunkHTTP::initialize() { + SplunkHECProcessor::initialize(); + setSupportedRelationships({Success, Failure}); + updateSupportedProperties({Source, SourceType, Host, Index, ContentType}); +} + +void PutSplunkHTTP::onSchedule(const std::shared_ptr& context, const std::shared_ptr& sessionFactory) { + SplunkHECProcessor::onSchedule(context, sessionFactory); +} + + +namespace { +std::optional getContentType(core::ProcessContext& context, const gsl::not_null>& flow_file) { + std::optional content_type = context.getProperty(PutSplunkHTTP::ContentType); + if (content_type.has_value()) +return content_type; + return flow_file->getAttribute("mime.key"); +} 
+ + +std::string getEndpoint(core::ProcessContext& context, const gsl::not_null>& flow_file) { + std::stringstream endpoint; + endpoint << "/services/collector/raw"; + std::vector parameters; + std::string prop_value; + if (context.getProperty(PutSplunkHTTP::SourceType, prop_value, flow_file)) { +parameters.push_back("sourcetype=" + prop_value); + } + if (context.getProperty(PutSplunkHTTP::Source, prop_value, flow_file)) { +parameters.push_back("source=" + prop_value); + } + if (context.getProperty(PutSplunkHTTP::Host, prop_value, flow_file)) { +parameters.push_back("host=" + prop_value); + } + if (context.getProperty(PutSplunkHTTP::Index, prop_value, flow_file)) { +parameters.push_back("index=" + prop_value); + } + if (!parameters.empty()) { +endpoint << "?" << utils::StringUtils::join("&", parameters); + } + return endpoint.str(); +} + +bool addAttributesFromClientResponse(core::FlowFile& flow_file, utils::HTTPClient& client) { + rapidjson::Document response_json; + rapidjson::ParseResult parse_result = response_js
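The `getEndpoint()` helper in the quoted diff builds the raw-collector URL by collecting only the properties that are actually set and joining them as query parameters. A standalone sketch of that technique; the names `joinParams` and `buildRawEndpoint` are illustrative, not part of the MiNiFi API:

```cpp
#include <cassert>
#include <cstddef>
#include <optional>
#include <string>
#include <vector>

// Join parts with a separator, mirroring utils::StringUtils::join from the diff.
std::string joinParams(const std::vector<std::string>& parts, const std::string& sep) {
  std::string result;
  for (std::size_t i = 0; i < parts.size(); ++i) {
    if (i != 0) result += sep;
    result += parts[i];
  }
  return result;
}

// Build "/services/collector/raw[?k=v&...]" from whichever optional
// properties (Source Type, Source, Host, Index) are present.
std::string buildRawEndpoint(const std::optional<std::string>& source_type,
                             const std::optional<std::string>& source,
                             const std::optional<std::string>& host,
                             const std::optional<std::string>& index) {
  std::vector<std::string> parameters;
  if (source_type) parameters.push_back("sourcetype=" + *source_type);
  if (source) parameters.push_back("source=" + *source);
  if (host) parameters.push_back("host=" + *host);
  if (index) parameters.push_back("index=" + *index);
  std::string endpoint = "/services/collector/raw";
  if (!parameters.empty())
    endpoint += "?" + joinParams(parameters, "&");
  return endpoint;
}
```

With no properties set this yields the bare `/services/collector/raw` path, exactly as in the diff's empty-parameter case. Note that a production version would also need to URL-encode the property values.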
[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1219: MINIFICPP-1691: PutSplunkHTTP and QuerySplunkIndexingStatus
martinzink commented on a change in pull request #1219: URL: https://github.com/apache/nifi-minifi-cpp/pull/1219#discussion_r777461103 ## File path: libminifi/include/utils/TimeUtil.h ## @@ -37,6 +37,24 @@ namespace minifi { namespace utils { namespace timeutils { +/** + * Converts the time point to the elapsed time since epoch + * @returns TimeUnit since epoch + */ +template +uint64_t getTimeStamp(const TimePoint& time_point) { Review comment: nice catch, changed this and the usages in https://github.com/apache/nifi-minifi-cpp/pull/1219/commits/25e4878262c6ec80237339cd7388e7caab830ea0#diff-4a76905d55704437ae0a7a4ee434a73f057c105c97bdf8756f5747b4fbc65e9fR45 ## File path: extensions/splunk/SplunkHECProcessor.cpp ## @@ -0,0 +1,81 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#include "SplunkHECProcessor.h" +#include "client/HTTPClient.h" +#include "utils/HTTPClient.h" + +namespace org::apache::nifi::minifi::extensions::splunk { + +const core::Property SplunkHECProcessor::Hostname(core::PropertyBuilder::createProperty("Hostname") +->withDescription("The ip address or hostname of the Splunk server.") +->isRequired(true)->build()); + +const core::Property SplunkHECProcessor::Port(core::PropertyBuilder::createProperty("Port") +->withDescription("The HTTP Event Collector HTTP Port Number.") +->withDefaultValue("8088")->isRequired(true)->build()); + +const core::Property SplunkHECProcessor::Token(core::PropertyBuilder::createProperty("Token") +->withDescription("HTTP Event Collector token starting with the string Splunk. For example \'Splunk 1234578-abcd-1234-abcd-1234abcd\'") +->isRequired(true)->build()); + +const core::Property SplunkHECProcessor::SplunkRequestChannel(core::PropertyBuilder::createProperty("Splunk Request Channel") +->withDescription("Identifier of the used request channel.")->isRequired(true)->build()); + +const core::Property SplunkHECProcessor::SSLContext(core::PropertyBuilder::createProperty("SSL Context Service") +->withDescription("The SSL Context Service used to provide client certificate " + "information for TLS/SSL (https) connections.") +->isRequired(false)->withExclusiveProperty("Remote URL", "^http:.*$") +->asType()->build()); + +void SplunkHECProcessor::initialize() { + setSupportedProperties({Hostname, Port, Token, SplunkRequestChannel}); +} + +void SplunkHECProcessor::onSchedule(const std::shared_ptr& context, const std::shared_ptr&) { + gsl_Expects(context); + if (!context->getProperty(Hostname.getName(), hostname_)) +throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Failed to get Hostname"); + + if (!context->getProperty(Port.getName(), port_)) +throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Failed to get Port"); + + if (!context->getProperty(Token.getName(), token_)) +throw 
Exception(PROCESS_SCHEDULE_EXCEPTION, "Failed to get Token"); + + if (!context->getProperty(SplunkRequestChannel.getName(), request_channel_)) +throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Failed to get SplunkRequestChannel"); +} + +std::string SplunkHECProcessor::getUrl() const { + return hostname_ + ":" + port_; Review comment: changed it in https://github.com/apache/nifi-minifi-cpp/pull/1219/commits/25e4878262c6ec80237339cd7388e7caab830ea0#diff-fffe34ee91301ca41f9e7bc23592fde7ae3205eb7e6e7c0c4e796694b5babac6R56
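The `TimeUtil.h` hunk reviewed above introduces a `getTimeStamp` template that converts a `std::chrono` time point into an integral count since the epoch. A minimal sketch of that conversion; the name mirrors the quoted diff, but this exact signature is an assumption, not the merged code:

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>

// Convert any std::chrono::time_point into an integral count of a target
// duration unit elapsed since the clock's epoch.
template <typename TargetDuration = std::chrono::milliseconds,
          typename Clock, typename Duration>
uint64_t getTimeStamp(const std::chrono::time_point<Clock, Duration>& time_point) {
  return std::chrono::duration_cast<TargetDuration>(time_point.time_since_epoch()).count();
}
```

For example, `getTimeStamp(std::chrono::system_clock::now())` yields milliseconds since the Unix epoch on common platforms, since `system_clock`'s epoch is the Unix epoch there.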
[GitHub] [nifi-minifi-cpp] martinzink commented on a change in pull request #1219: MINIFICPP-1691: PutSplunkHTTP and QuerySplunkIndexingStatus
martinzink commented on a change in pull request #1219: URL: https://github.com/apache/nifi-minifi-cpp/pull/1219#discussion_r777461024 ## File path: docker/test/integration/minifi/core/SplunkContainer.py ## @@ -0,0 +1,26 @@ +import logging +from .Container import Container + + +class SplunkContainer(Container): +def __init__(self, name, vols, network, image_store): +super().__init__(name, 'splunk', vols, network, image_store) + +def get_startup_finished_log_entry(self): +return "Ansible playbook complete, will begin streaming splunkd_stderr.log" + +def deploy(self): +if not self.set_deployed(): +return + +logging.info('Creating and running Splunk docker container...') +self.client.containers.run( +self.image_store.get_image(self.get_engine()), +detach=True, +name=self.name, +network=self.network.name, +environment=[ +"SPLUNK_START_ARGS=--accept-license", Review comment: They have different license terms based on the type we are using. https://docs.splunk.com/Documentation/Splunk/8.2.4/Admin/TypesofSplunklicenses The whole EULA can be found here: https://www.splunk.com/eula/sii/1.4 I changed this section in https://github.com/apache/nifi-minifi-cpp/pull/1219/commits/b79afdea234a890818fa39c6a5ded4fb5fc117d8 so we explicitly accept the free license, which can be used for these kinds of tests (we only use this container for the integration tests); more about it here: https://docs.splunk.com/Documentation/Splunk/8.2.4/Admin/TypesofSplunklicenses#Free_license ## File path: extensions/splunk/PutSplunkHTTP.h ## @@ -0,0 +1,54 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#pragma once + +#include +#include + +#include "SplunkHECProcessor.h" +#include "utils/gsl.h" + +namespace org::apache::nifi::minifi::extensions::splunk { + +class PutSplunkHTTP final : public SplunkHECProcessor { Review comment: fixed in https://github.com/apache/nifi-minifi-cpp/pull/1219/commits/25e4878262c6ec80237339cd7388e7caab830ea0#diff-fffe34ee91301ca41f9e7bc23592fde7ae3205eb7e6e7c0c4e796694b5babac6R51-R54
[GitHub] [nifi-minifi-cpp] adamdebreceni closed pull request #1228: MINIFICPP-1703 ExecuteScript error handling fix, more docs and tests
adamdebreceni closed pull request #1228: URL: https://github.com/apache/nifi-minifi-cpp/pull/1228
[jira] [Created] (MINIFICPP-1705) Error while loading OpenCV extension
Gábor Gyimesi created MINIFICPP-1705: Summary: Error while loading OpenCV extension Key: MINIFICPP-1705 URL: https://issues.apache.org/jira/browse/MINIFICPP-1705 Project: Apache NiFi MiNiFi C++ Issue Type: Bug Reporter: Gábor Gyimesi Assignee: Gábor Gyimesi The OpenCV extension cannot be dynamically loaded. The following error occurs when starting MiNiFi: {code} [2021-11-29 14:15:01.577] [org::apache::nifi::minifi::core::extension::DynamicLibrary] [error] Failed to load extension 'minifi-opencv' at '/home/ggyimesi/temp/test_log_error/nifi-minifi-cpp-0.11.0/bin/../extensions/libminifi-opencv.so': /home/ggyimesi/temp/test_log_error/nifi-minifi-cpp-0.11.0/bin/../extensions/libminifi-opencv.so: undefined symbol: gzgets {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #1229: MINIFICPP-1704 - Update version number to 0.12.0
adamdebreceni opened a new pull request #1229: URL: https://github.com/apache/nifi-minifi-cpp/pull/1229 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically main)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible.
[jira] [Updated] (MINIFICPP-1704) Update version number to 0.12.0
[ https://issues.apache.org/jira/browse/MINIFICPP-1704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Debreceni updated MINIFICPP-1704: -- Summary: Update version number to 0.12.0 (was: Bump version number to 0.12.0) > Update version number to 0.12.0 > --- > > Key: MINIFICPP-1704 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1704 > Project: Apache NiFi MiNiFi C++ > Issue Type: Task >Reporter: Adam Debreceni >Assignee: Adam Debreceni >Priority: Major >
[jira] [Created] (MINIFICPP-1704) Bump version number to 0.12.0
Adam Debreceni created MINIFICPP-1704: - Summary: Bump version number to 0.12.0 Key: MINIFICPP-1704 URL: https://issues.apache.org/jira/browse/MINIFICPP-1704 Project: Apache NiFi MiNiFi C++ Issue Type: Task Reporter: Adam Debreceni Assignee: Adam Debreceni
[jira] [Updated] (MINIFICPP-1223) Stop reloading script files every time ExecutePythonProcessor is triggered
[ https://issues.apache.org/jira/browse/MINIFICPP-1223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi updated MINIFICPP-1223: - Fix Version/s: 0.12.0 (was: 0.11.0) > Stop reloading script files every time ExecutePythonProcessor is triggered > -- > > Key: MINIFICPP-1223 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1223 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Adam Hunyadi >Assignee: Gábor Gyimesi >Priority: Minor > Fix For: 1.0.0, 0.12.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > *Acceptance criteria:* > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" not > set) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file inbetween > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should not change > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" > disabled) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file inbetween > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should not change > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" > enabled) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file inbetween > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should follow the updated script > *Background:* > For backward compatibility, we went for keeping the behaviour of reading the > script file every time the processor is triggered intact. > *Proposal:* > We would like to add an option called *"Reload on Script Change"* to toggle > this with the first major release.
[jira] [Updated] (MINIFICPP-1695) Fix execution of native python processors
[ https://issues.apache.org/jira/browse/MINIFICPP-1695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi updated MINIFICPP-1695: - Fix Version/s: 0.12.0 (was: 0.11.0) > Fix execution of native python processors > - > > Key: MINIFICPP-1695 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1695 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gábor Gyimesi >Assignee: Gábor Gyimesi >Priority: Minor > Fix For: 0.12.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > Make sure native python processors and also ExecutePythonProcessor can be > individually executed.
[jira] [Updated] (MINIFICPP-1694) Remove false positive error log messages
[ https://issues.apache.org/jira/browse/MINIFICPP-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi updated MINIFICPP-1694: - Fix Version/s: 0.12.0 > Remove false positive error log messages > > > Key: MINIFICPP-1694 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1694 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gábor Gyimesi >Assignee: Gábor Gyimesi >Priority: Minor > Fix For: 0.12.0 > > Time Spent: 2h 40m > Remaining Estimate: 0h > > There are some error messages in MiNiFi that do not actually signal a real > error. There were instances when users observed these errors while debugging > a separate issue and they were suspicious that these might be related so > these messages can be quite misleading. Most common of these can be observed > at startup: > * Happens when "type" field is used instead of "class" which is also > acceptable: > {code:java} > [2021-11-29 13:43:52.892] > [org::apache::nifi::minifi::core::YamlConfiguration] [error] Unable to parse > configuration file for component named 'AzureStorageCredentialsService' as > required field 'class' is missing [in 'Controller Services' section of > configuration file] [line:column, pos at 52:2, 1660] {code} > * This should probably be a warning if no configuration file is defined and > the default is not available: > {code:java} > [2021-11-29 13:43:52.898] [org::apache::nifi::minifi::Properties] [error] > load configure file failed{code} > * When component state not available anymore this can be also ignored, so it > should not be an error: > {code:java} > [2021-11-29 13:43:52.919] > [org::apache::nifi::minifi::controllers::RocksDbPersistableKeyValueStoreService] > [error] Failed to Get key 94b8e610-b4ed-1ec9-b26f-c839931bf3e2 from RocksDB > database at corecomponentstate, error: (null){code}
[jira] [Updated] (MINIFICPP-1355) Investigate and fix the initialization of ExecutePythonProcessor
[ https://issues.apache.org/jira/browse/MINIFICPP-1355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi updated MINIFICPP-1355: - Fix Version/s: 0.12.0 (was: 0.11.0) > Investigate and fix the initialization of ExecutePythonProcessor > > > Key: MINIFICPP-1355 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1355 > Project: Apache NiFi MiNiFi C++ > Issue Type: Task >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Gábor Gyimesi >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > Fix For: 1.0.0, 0.12.0 > > Attachments: Screenshot 2020-09-04 at 16.02.41.png > > Time Spent: 1h 10m > Remaining Estimate: 0h > > *Acceptance criteria:* > - GIVEN a flow set up as highlighted in blue below > - WHEN the flow is with a python script set to add a new attribute to a flow > file > - THEN no error is produced and the newly added attribute is logged in > LogAttribute > {code:c++|title=Example script} > def describe(processor): > processor.setDescription("Adds an attribute to your flow files") > def onInitialize(processor): > processor.setSupportsDynamicProperties() > def onTrigger(context, session): > flow_file = session.get() > if flow_file is not None: > flow_file.addAttribute("Python attribute","attributevalue") > session.transfer(flow_file, REL_SUCCESS) > {code} > *Background:* > Currently, even though the tests for ExecutePythonProcessor are passing, if I > were to try and load up a configuration that contains an > ExecutePythonProcessor, it fails due to trying to load an incorrect script > file. > Sample flow: > {color:#0747a6}GenerateFlowFile -(success)-> ExecutePythonProcessor > -(success,failure)-> LogAttribute{color} > When trying to check in debugger, it seems like the processors script file is > always replaced with an incorrect one, and the processor fails to start. 
> This is how it is set: > {code:c++|title=Trace of where the property is overridden} > ConfigurableComponent::setProperty() > std::shared_ptr create() > ClassLoader::instantiate() > PythonCreator::configure() <- here the first element of classpaths_ is read > to overwrite the config > FlowController::initializeExternalComponents() > {code} > When trying to perform the same thing on the 0.7.0 release version, the > startup already shows some kind of errors, although they seem different: > {code:python|title=Error log} > [2020-09-04 15:49:53.424] > [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] > [error] Caught Exception ModuleNotFoundError: No module named 'google' > At: > > /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): > > [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] > [warning] Cannot load SentimentAnalyzer because of ModuleNotFoundError: No > module named 'google' > At: > > /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//google/SentimentAnalyzer.py(28): > > [2020-09-04 15:49:53.424] > [org::apache::nifi::minifi::python::processors::ExecutePythonProcessor] > [error] Caught Exception ModuleNotFoundError: No module named 'vaderSentiment' > At: > > /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): > > [2020-09-04 15:49:53.424] [org::apache::nifi::minifi::python::PythonCreator] > [warning] Cannot load SentimentAnalysis because of ModuleNotFoundError: No > module named 'vaderSentiment' > At: > > /Users/adamhunyadi/Documents/Projects/integration_tests/minifi_agent_02/build/nifi-minifi-cpp-0.7.0/minifi-python//examples/SentimentAnalysis.py(17): > > {code} > *Proposal:* > One should investigate and fix the error. 
[jira] [Resolved] (MINIFICPP-1223) Stop reloading script files every time ExecutePythonProcessor is triggered
[ https://issues.apache.org/jira/browse/MINIFICPP-1223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi resolved MINIFICPP-1223. -- Fix Version/s: 0.11.0 Resolution: Fixed > Stop reloading script files every time ExecutePythonProcessor is triggered > -- > > Key: MINIFICPP-1223 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1223 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Adam Hunyadi >Assignee: Gábor Gyimesi >Priority: Minor > Fix For: 1.0.0, 0.11.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > *Acceptance criteria:* > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" not > set) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file inbetween > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should not change > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" > disabled) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file inbetween > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should not change > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" > enabled) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file inbetween > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should follow the updated script > *Background:* > For backward compatibility, we went for keeping the behaviour of reading the > script file every time the processor is triggered intact. > *Proposal:* > We would like to add an option called *"Reload on Script Change"* to toggle > this with the first major release.
[jira] [Updated] (MINIFICPP-1695) Fix execution of native python processors
[ https://issues.apache.org/jira/browse/MINIFICPP-1695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi updated MINIFICPP-1695: - Fix Version/s: 0.11.0 > Fix execution of native python processors > - > > Key: MINIFICPP-1695 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1695 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gábor Gyimesi >Assignee: Gábor Gyimesi >Priority: Minor > Fix For: 0.11.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > Make sure native python processors and also ExecutePythonProcessor can be > individually executed.
[jira] [Resolved] (MINIFICPP-1695) Fix execution of native python processors
[ https://issues.apache.org/jira/browse/MINIFICPP-1695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi resolved MINIFICPP-1695. -- Resolution: Fixed > Fix execution of native python processors > - > > Key: MINIFICPP-1695 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1695 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Gábor Gyimesi >Assignee: Gábor Gyimesi >Priority: Minor > Time Spent: 1h 50m > Remaining Estimate: 0h > > Make sure native python processors and also ExecutePythonProcessor can be > individually executed.
[GitHub] [nifi] lordgamez opened a new pull request #5312: NIFI-9058 Core attributes shall not be filtered from Attributes List
lordgamez opened a new pull request #5312: URL: https://github.com/apache/nifi/pull/5312 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Currently the "Include Core Attributes" property filters out any core attributes specified in the "Attributes List" property, but does not filter any attributes matching the regular expression in the "Attributes Regular Expression" property. This makes the functionality of the processor inconsistent. "Include Core Attributes" should not filter out specifically listed attributes in the "Attributes List" property, and this PR removes this filtering. https://issues.apache.org/jira/browse/NIFI-9058 In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [X] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [X] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [X] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [X] Have you verified that the full build is successful on JDK 11? 
- [X] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [X] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [X] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [X] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [X] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
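The attribute-selection behaviour NIFI-9058 argues for (an explicitly listed or regex-matched attribute is never dropped by the "Include Core Attributes" filter) can be sketched as a selection predicate. This is an illustrative model in C++, not NiFi's actual Java implementation, and every name below is hypothetical:

```cpp
#include <cassert>
#include <regex>
#include <set>
#include <string>

// Illustrative predicate: should a flow-file attribute be kept?
// explicit_list models "Attributes List", pattern models "Attributes Regular
// Expression", include_core models "Include Core Attributes". Per the PR
// description, explicit selection wins regardless of the core-attribute flag.
bool keepAttribute(const std::string& name,
                   const std::set<std::string>& explicit_list,
                   const std::regex& pattern,
                   const std::set<std::string>& core_attributes,
                   bool include_core) {
  // Explicit selection (list or regex) always wins, core attribute or not.
  if (explicit_list.count(name) != 0 || std::regex_match(name, pattern))
    return true;
  // Otherwise, core attributes are kept only when "Include Core Attributes" is set.
  return include_core && core_attributes.count(name) != 0;
}
```

Under this model, listing a core attribute such as `uuid` in "Attributes List" behaves the same as matching it with the regex, which is exactly the consistency the PR description calls for.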