[jira] [Updated] (NIFI-9662) End of life for mail-1.4.7.jar used in nifi-framework-bundle
[ https://issues.apache.org/jira/browse/NIFI-9662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vrinda Palod updated NIFI-9662:
-------------------------------
Description:
What is the plan for upgrading the mail-1.4.7.jar file used inside nifi-nar-bundles/nifi-framework-bundle/pom.xml? The current NiFi version has moved to Jakarta Mail, but this dependency has not been upgraded yet.
[https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-framework-bundle/pom.xml]

was:
What is the plan for upgrading the mail-1.4.7.jar file used inside nifi-registry/nifi-registry-extensions/nifi-registry-ranger/nifi-registry-ranger-plugin/pom.xml? The current NiFi version has moved to Jakarta Mail, but this dependency has not been upgraded yet.
[https://github.com/apache/nifi/blob/main/nifi-registry/nifi-registry-extensions/nifi-registry-ranger/nifi-registry-ranger-plugin/pom.xml]

> End of life for mail-1.4.7.jar used in nifi-framework-bundle
> ------------------------------------------------------------
>
> Key: NIFI-9662
> URL: https://issues.apache.org/jira/browse/NIFI-9662
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.16.0
> Reporter: Vrinda Palod
> Priority: Major
> Labels: dependencies
>
> What is the plan for upgrading the mail-1.4.7.jar file used inside
> nifi-nar-bundles/nifi-framework-bundle/pom.xml?
> The current NiFi version has moved to Jakarta Mail, but this dependency has not been
> upgraded yet.
> [https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-framework-bundle/pom.xml]

--
This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (NIFI-9662) End of life for mail-1.4.7.jar used in nifi-framework-bundle
Vrinda Palod created NIFI-9662:
-------------------------------
Summary: End of life for mail-1.4.7.jar used in nifi-framework-bundle
Key: NIFI-9662
URL: https://issues.apache.org/jira/browse/NIFI-9662
Project: Apache NiFi
Issue Type: Improvement
Components: Core Framework
Affects Versions: 1.16.0
Reporter: Vrinda Palod

What is the plan for upgrading the mail-1.4.7.jar file used inside nifi-registry/nifi-registry-extensions/nifi-registry-ranger/nifi-registry-ranger-plugin/pom.xml? The current NiFi version has moved to Jakarta Mail, but this dependency has not been upgraded yet.
[https://github.com/apache/nifi/blob/main/nifi-registry/nifi-registry-extensions/nifi-registry-ranger/nifi-registry-ranger-plugin/pom.xml]
[jira] [Created] (NIFI-9661) End of life for mail-1.4.7.jar used in Nifi Registry ranger plugin
Vrinda Palod created NIFI-9661:
-------------------------------
Summary: End of life for mail-1.4.7.jar used in Nifi Registry ranger plugin
Key: NIFI-9661
URL: https://issues.apache.org/jira/browse/NIFI-9661
Project: Apache NiFi
Issue Type: Improvement
Components: Core Framework
Affects Versions: 1.16.0
Reporter: Vrinda Palod

What is the plan for upgrading the mail-1.4.7.jar file used inside nifi-registry/nifi-registry-extensions/nifi-registry-ranger/nifi-registry-ranger-plugin/pom.xml? The current NiFi version has moved to Jakarta Mail, but this dependency has not been upgraded yet.
[https://github.com/apache/nifi/blob/main/nifi-registry/nifi-registry-extensions/nifi-registry-ranger/nifi-registry-ranger-plugin/pom.xml]
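[Editor's note] For context on what the requested upgrade typically involves: the legacy `com.sun.mail:mail:1.4.7` (JavaMail) artifact is usually replaced with Jakarta Mail coordinates. The snippet below is an illustrative sketch only — the version shown is a plausible Jakarta Mail 1.x release, not taken from the NiFi or NiFi Registry build files:

```xml
<!-- Illustrative replacement for the legacy JavaMail dependency.
     The 1.6.x line retains the javax.mail.* package namespace;
     the 2.x line moves to jakarta.mail.* and may require code changes. -->
<dependency>
    <groupId>com.sun.mail</groupId>
    <artifactId>jakarta.mail</artifactId>
    <version>1.6.7</version>
</dependency>
```

Choosing a 1.6.x release keeps the `javax.mail` imports compiling unchanged, which makes it a lower-risk first step than jumping straight to the `jakarta.mail` 2.x namespace.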
[GitHub] [nifi] joewitt commented on pull request #5324: NIFI-9072: improvements to ValidateXML including validate XML in attr…
joewitt commented on pull request #5324: URL: https://github.com/apache/nifi/pull/5324#issuecomment-1033304869

On further thought... In this PR we do have an annotation to warn users. The EL examples provided, where we're already inconsistent, don't have that benefit. This is at least better handled than those. So to that end I don't know that we have a strong enough argument to not bring this in.

MarkP: I'm not saying I disagree with your points. Of course they're all true. But I do, at least for my own comments here, feel like I have a weak case to not support this in comparison to existing items.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org

For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (NIFI-9621) Add Ignore Reserved Characters Property to FlattenJson
[ https://issues.apache.org/jira/browse/NIFI-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Handermann resolved NIFI-9621.
------------------------------------
Fix Version/s: 1.16.0
Resolution: Fixed

> Add Ignore Reserved Characters Property to FlattenJson
> ------------------------------------------------------
>
> Key: NIFI-9621
> URL: https://issues.apache.org/jira/browse/NIFI-9621
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Jaeyeong Baek
> Priority: Minor
> Fix For: 1.16.0
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Improvement to FlattenJson Processor to support 'ignore reserved characters
> while flattening'
[jira] [Updated] (NIFI-9621) Add Ignore Reserved Characters Property to FlattenJson
[ https://issues.apache.org/jira/browse/NIFI-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Handermann updated NIFI-9621:
-----------------------------------
Summary: Add Ignore Reserved Characters Property to FlattenJson (was: Ignore reserved characters to FlattenJson Processor)

> Add Ignore Reserved Characters Property to FlattenJson
> ------------------------------------------------------
>
> Key: NIFI-9621
> URL: https://issues.apache.org/jira/browse/NIFI-9621
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Jaeyeong Baek
> Priority: Minor
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Improvement to FlattenJson Processor to support 'ignore reserved characters
> while flattening'
[jira] [Commented] (NIFI-9621) Ignore reserved characters to FlattenJson Processor
[ https://issues.apache.org/jira/browse/NIFI-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17489216#comment-17489216 ]

ASF subversion and git services commented on NIFI-9621:
-------------------------------------------------------
Commit bbc78f154701941688ee227c6190cfcfdbc0848f in nifi's branch refs/heads/main from Noel Baek
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=bbc78f1 ]

NIFI-9621: Added Ignore Reserved Characters to FlattenJson
- Upgraded json-flattener from 0.12.0 to 0.13.0

This closes #5704

Signed-off-by: David Handermann

> Ignore reserved characters to FlattenJson Processor
> ---------------------------------------------------
>
> Key: NIFI-9621
> URL: https://issues.apache.org/jira/browse/NIFI-9621
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Jaeyeong Baek
> Priority: Minor
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Improvement to FlattenJson Processor to support 'ignore reserved characters
> while flattening'
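[Editor's note] The "reserved characters" problem this ticket addresses is easy to see with a toy flattener. The sketch below is not NiFi or json-flattener code — it is a minimal stdlib-only illustration of why a key that already contains the separator character (here `.`) becomes ambiguous once nesting is flattened, which is what the new processor property lets users opt out of handling:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class FlattenDemo {
    // Recursively flatten nested maps into dot-separated keys.
    @SuppressWarnings("unchecked")
    static Map<String, Object> flatten(String prefix, Map<String, ?> node, Map<String, Object> out) {
        for (Map.Entry<String, ?> e : node.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            Object value = e.getValue();
            if (value instanceof Map) {
                flatten(key, (Map<String, ?>) value, out);
            } else {
                out.put(key, value);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> inner = new LinkedHashMap<>();
        inner.put("b", 1);
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("a", inner);   // nested object: flattens to the key "a.b"
        doc.put("x.y", 2);     // literal key that already contains the reserved '.'
        // After flattening, "a.b" (from nesting) and "x.y" (a literal key)
        // are indistinguishable in origin.
        System.out.println(flatten("", doc, new LinkedHashMap<>()));  // prints {a.b=1, x.y=2}
    }
}
```

A flattener that escapes or ignores reserved characters in literal keys avoids this collision when the output is later expanded back into nested JSON.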
[GitHub] [nifi] joewitt commented on pull request #5324: NIFI-9072: improvements to ValidateXML including validate XML in attr…
joewitt commented on pull request #5324: URL: https://github.com/apache/nifi/pull/5324#issuecomment-1033295071

I agree those are excellent examples of inconsistency, and we should ensure we have disclaimers on them about usage. In a NiFi 2.0 world I think we ought to punt those out for the reasons we are talking about here, or at least make the danger of their usage more obvious. I'm not sure that view is widely shared, but the more I spend time with users who truly just don't know these things, the more we have to find reasonable ways to provide safety. On the other hand, the more we turn this into a safety tool, the less powerful it is for the more capable people. I'm not sure what the best way to thread that needle is without coming across or acting inconsistently. Maybe we were on the right track with annotations which alerted the user to such things, but we need to make that a more prominent part of the user experience.

Otto, I do think we have some limit on the size of attributes, or at least I know we considered it. On the other hand, we also have to consider how many attributes there can be on a given flowfile. But then on top of that we have to consider how many flowfiles can be in the system at any one time, and of those, how many can have their metadata in memory at any one time. It is not uncommon to see real user flows with hundreds of thousands or millions of flowfiles in flight at any one time under normal conditions. The swapping stuff helps but doesn't eliminate the problem. Hard limits become tricky as they're fairly arbitrary.
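[Editor's note] The scale concern in the comment above can be put in rough numbers. The figures below are illustrative assumptions, not NiFi measurements or internals — they just show how quickly on-heap attribute metadata multiplies when large documents are stored in attributes:

```java
// Back-of-envelope heap estimate for FlowFile attribute metadata.
// All numbers are illustrative assumptions, not NiFi internals.
class AttributeHeapEstimate {
    static long estimateBytes(long flowFilesInFlight, long avgAttributeBytesPerFlowFile) {
        return flowFilesInFlight * avgAttributeBytesPerFlowFile;
    }

    public static void main(String[] args) {
        long flowFiles = 1_000_000L;  // "millions of flowfiles in flight" per the comment
        long attrBytes = 50_000L;     // e.g. one 50 KB XML document stored in an attribute
        long total = estimateBytes(flowFiles, attrBytes);
        // 1,000,000 * 50,000 bytes = 50 GB of heap just for attribute values
        System.out.printf("~%.1f GB of heap for attributes%n", total / 1e9);
    }
}
```

Even with attribute swapping, numbers like these show why reviewers push back on patterns that encourage storing full documents in attributes rather than in FlowFile content.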
[GitHub] [nifi] exceptionfactory closed pull request #5704: NIFI-9621: Ignore reserved characters to FlattenJson Processor
exceptionfactory closed pull request #5704: URL: https://github.com/apache/nifi/pull/5704
[GitHub] [nifi] exceptionfactory commented on pull request #5704: NIFI-9621: Ignore reserved characters to FlattenJson Processor
exceptionfactory commented on pull request #5704: URL: https://github.com/apache/nifi/pull/5704#issuecomment-1033294298

Thanks for the contribution @Noel-bk! The new property and associated unit tests look good. +1 Merging.
[GitHub] [nifi] markobean commented on pull request #5324: NIFI-9072: improvements to ValidateXML including validate XML in attr…
markobean commented on pull request #5324: URL: https://github.com/apache/nifi/pull/5324#issuecomment-1033290551

@markap14 I do not disagree with any points you have made about the dangers of using large attributes. Also, thanks for detailing the reasons why this can be a problem for other users who may not be aware. Rest assured, these were all considerations and well within the understanding and scope of the use case which drove this ticket. We have never had OOM issues, and the usage is for approximately 200 FlowFiles per day with reasonably sized XML attributes. I appreciate the necessity to prevent "bad things" happening as much as possible. While I respectfully disagree with limiting the platform out of fear of self-induced problems, I understand such problems do occur in the real world. Since NiFi is easily extensible, we will apply these changes to a custom processor to satisfy the specific case without injecting it into the NiFi community.

@joewitt I hear what you're saying about being inconsistent. A prime example related to this PR for processing XML in attributes is available right in Expression Language. It has functions such as (un)escapeXml, and also (un)escapeJson and (un)escapeHtml. It seems the availability of these functions promotes placing XML (or JSON or HTML) into attributes equally as much as this PR, or even more, since EL can be applied to a wide variety of processors. EL even has other jsonPath* functions whose simple examples in the EL Guide include 300-500 character JSON strings. Perhaps these examples should be updated to simpler JSON more consistent with the recommended limitation.
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5378: NIFI-9166 Refactored nifi-standard-services to use JUnit 5.
exceptionfactory commented on a change in pull request #5378: URL: https://github.com/apache/nifi/pull/5378#discussion_r802227847

## File path: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/test/java/org/apache/nifi/dbcp/DBCPServiceTest.java

@@ -23,48 +23,49 @@
 import org.apache.nifi.reporting.InitializationException;
 import org.apache.nifi.util.TestRunner;
 import org.apache.nifi.util.TestRunners;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.TemporaryFolder;
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.condition.DisabledOnOs;
+import org.junit.jupiter.api.condition.OS;
+import org.junit.jupiter.api.io.TempDir;
 import java.io.File;
+import java.nio.file.Path;
 import java.sql.Connection;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.ArrayList;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertTrue;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+import static org.junit.jupiter.api.Assertions.assertThrows;
+import static org.junit.jupiter.api.Assertions.assertTrue;
+@DisabledOnOs(OS.WINDOWS)

Review comment: Thanks @MikeThomsen, I'm inclined to move ahead with merging given all of the other improvements in this PR. I looked at `DatabaseRecordSinkTest` and noticed that it uses the Derby DB and similar structure, but it runs on Windows.
It seems like this could be related to the fact that it uses a custom `AfterAll` method that ignores `IOException`s when deleting the database location; that also allows it to run on all platforms. Although this is minor, it avoids one step backward. Can you take one more look at using the cleanup approach from `DatabaseRecordSinkTest`? If that doesn't work, then I'm fine with moving forward disabling on Windows, but since it is so similar, it seems worth one more try.
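[Editor's note] The cleanup pattern described above — deleting the database directory while swallowing `IOException`s so locked files on Windows don't fail the teardown — might be sketched like this. This is a stdlib-only illustration, not the actual `DatabaseRecordSinkTest` code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

class QuietCleanup {
    // Delete a directory tree, ignoring IOExceptions (e.g. Windows file locks)
    // so a test teardown never fails the build over leftover files.
    static void deleteQuietly(Path dir) {
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder())  // children before parents
                .forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException ignored) {
                        // best effort: a locked file is left for the OS to clean up
                    }
                });
        } catch (IOException ignored) {
            // directory may not exist at all; nothing to do
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("derby-test");
        Files.writeString(dir.resolve("db.lck"), "lock");
        deleteQuietly(dir);
        System.out.println("deleted: " + !Files.exists(dir));
    }
}
```

Calling this from an `@AfterAll` method gives the best-effort cleanup the reviewer describes, without needing to disable the test on any platform.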
[GitHub] [nifi] MikeThomsen commented on a change in pull request #5378: NIFI-9166 Refactored nifi-standard-services to use JUnit 5.
MikeThomsen commented on a change in pull request #5378: URL: https://github.com/apache/nifi/pull/5378#discussion_r802225298

## File path: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/test/java/org/apache/nifi/dbcp/DBCPServiceTest.java

(quotes the same `@DisabledOnOs(OS.WINDOWS)` hunk shown in the review above)

Review comment: I think we can skip this on Windows because it's a problem with Windows and not the unit tests. They run just fine on macOS and Linux.
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5753: NIFI-7333 - Added a new property to allow specifying whether to use N…
exceptionfactory commented on a change in pull request #5753: URL: https://github.com/apache/nifi/pull/5753#discussion_r80570

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/oidc/StandardOidcIdentityProvider.java

@@ -110,6 +117,11 @@ public void initializeProvider() { return; }
+// Decide whether to use NiFi truststore instead of system's cacerts when connecting to OIDC provider
+if (TruststoreStrategy.valueOf(properties.getOidcClientTruststoreStrategy()) == TruststoreStrategy.NIFI) {

Review comment: It looks like the latest automated build failed in relation to this enum. To avoid potential parsing failures related to `TruststoreStrategy.valueOf()`, this could be changed to check against the string value of the property:

```suggestion
if (TruststoreStrategy.NIFI.name().equals(properties.getOidcClientTruststoreStrategy())) {
```

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/oidc/StandardOidcIdentityProvider.java

@@ -485,10 +503,17 @@ private HTTPRequest createUserInfoRequest(BearerAccessToken bearerAccessToken) { }
 private HTTPRequest formHTTPRequest(Request request) {
-final HTTPRequest httpRequest = request.toHTTPRequest();
-httpRequest.setConnectTimeout(oidcConnectTimeout);
-httpRequest.setReadTimeout(oidcReadTimeout);
-return httpRequest;
+final HTTPRequest httpRequest = setHTTPRequestProperties(request.toHTTPRequest());
+return (httpRequest);

Review comment: This could be simplified to a single line, or the parentheses could be removed.

```suggestion
return setHTTPRequestProperties(request.toHTTPRequest());
```

## File path: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-security/src/main/java/org/apache/nifi/web/security/oidc/StandardOidcIdentityProvider.java

@@ -122,6 +134,15 @@ public void initializeProvider() { validateOIDCProviderMetadata(); }
+private void establishSslContext() {

Review comment: What do you think about renaming this to `setSslContext()`?
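[Editor's note] The first review suggestion above trades a possible `IllegalArgumentException` from `Enum.valueOf()` for a simple string comparison. A minimal standalone sketch of the difference — the `TruststoreStrategy` enum name and the `NIFI` constant come from the quoted diff, while the second constant and the helper method are hypothetical:

```java
class TruststoreStrategyDemo {
    // Mirrors the TruststoreStrategy enum referenced in the diff;
    // the JDK constant is a placeholder for "use the system cacerts".
    enum TruststoreStrategy { NIFI, JDK }

    // Safe check: returns false for null or unrecognized values instead of throwing.
    static boolean useNifiTruststore(String configured) {
        return TruststoreStrategy.NIFI.name().equals(configured);
    }

    public static void main(String[] args) {
        System.out.println(useNifiTruststore("NIFI"));   // true
        System.out.println(useNifiTruststore("bogus"));  // false
        System.out.println(useNifiTruststore(null));     // false

        // valueOf, by contrast, throws on anything but an exact constant name:
        try {
            TruststoreStrategy.valueOf("bogus");
        } catch (IllegalArgumentException e) {
            System.out.println("valueOf threw: " + e.getClass().getSimpleName());
        }
    }
}
```

Since the property value comes from user-edited configuration, the `name().equals(...)` form also handles a missing or misspelled property gracefully rather than failing startup.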
[GitHub] [nifi] exceptionfactory commented on pull request #5742: NIFI-9638 Refactor Google Guava usage in extensions
exceptionfactory commented on pull request #5742: URL: https://github.com/apache/nifi/pull/5742#issuecomment-1033283160

Thanks @kevdoran! Rebased to resolve merge conflict with `TestMetricsEventReportingTask`.
[GitHub] [nifi] exceptionfactory commented on a change in pull request #3890: NIFI-6871: Added HikariCPConnectionPool controller service
exceptionfactory commented on a change in pull request #3890: URL: https://github.com/apache/nifi/pull/3890#discussion_r802212136

## File path: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/test/java/org/apache/nifi/dbcp/HikariCPConnectionPoolTest.java

@@ -0,0 +1,273 @@
+/* (standard Apache License 2.0 header) */
+package org.apache.nifi.dbcp;
+
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.reporting.InitializationException;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+import java.io.File;
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+
+public class HikariCPConnectionPoolTest {
+    private final static String DB_LOCATION = "target/db";
+
+    @BeforeClass
+    public static void setupBeforeClass() {
+        System.setProperty("derby.stream.error.file", "target/derby.log");
+    }
+
+    @Before
+    public void setup() {
+        // remove previous test database, if any
+        final File dbLocation = new File(DB_LOCATION);
+        dbLocation.delete();
+    }
+
+    /**
+     * Missing property values.
+     */

Review comment: The comments appear to restate the method names, and could be removed.

## File path: nifi-nar-bundles/nifi-standard-services/nifi-dbcp-service-bundle/nifi-dbcp-service/src/main/java/org/apache/nifi/dbcp/HikariCPConnectionPool.java

@@ -0,0 +1,362 @@
(quoted hunk: the new HikariCPConnectionPool controller service — Apache license header, package declaration, and import list; the hunk is truncated at this point in the original message)
[GitHub] [nifi] github-actions[bot] commented on pull request #4901: NIFI-8326: Send records as individual messages in Kafka RecordSinks
github-actions[bot] commented on pull request #4901: URL: https://github.com/apache/nifi/pull/4901#issuecomment-1033190675

We're marking this PR as stale due to lack of updates in the past few months. If after another couple of weeks the stale label has not been removed, this PR will be closed. This stale marker and eventual auto-close does not indicate a judgement of the PR, just a lack of reviewer bandwidth, and helps us keep the PR queue more manageable. If you would like this PR re-opened, you can do so and a committer can remove the stale tag. Or you can open a new PR. Try to help review other PRs to increase PR review bandwidth, which in turn helps yours.
[GitHub] [nifi] github-actions[bot] closed pull request #5377: NIFI-9165 Refactored nifi-standard-bundle to use JUnit 5.
github-actions[bot] closed pull request #5377: URL: https://github.com/apache/nifi/pull/5377
[GitHub] [nifi] ottobackwards removed a comment on pull request #5324: NIFI-9072: improvements to ValidateXML including validate XML in attr…
ottobackwards removed a comment on pull request #5324: URL: https://github.com/apache/nifi/pull/5324#issuecomment-1033150634

Not to be silly, but why doesn't nifi cap the size of attributes explicitly?
[GitHub] [nifi] ottobackwards commented on pull request #5324: NIFI-9072: improvements to ValidateXML including validate XML in attr…
ottobackwards commented on pull request #5324: URL: https://github.com/apache/nifi/pull/5324#issuecomment-1033150634

Not to be silly, but why doesn't nifi cap the size of attributes explicitly?
[jira] [Updated] (NIFI-9660) Upgrade Apache Tika to 2.3.0
[ https://issues.apache.org/jira/browse/NIFI-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Handermann updated NIFI-9660:
-----------------------------------
Status: Patch Available (was: Open)

> Upgrade Apache Tika to 2.3.0
> ----------------------------
>
> Key: NIFI-9660
> URL: https://issues.apache.org/jira/browse/NIFI-9660
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework, Extensions
> Reporter: David Handermann
> Assignee: David Handermann
> Priority: Minor
> Time Spent: 10m
> Remaining Estimate: 0h
>
> The framework Content Viewer, the standard {{IdentifyMimeType}} processor,
> and the {{nifi-media-processors}} module leverage Apache Tika for type detection
> and metadata parsing. Apache Tika 2.3.0 incorporates a number of bug fixes
> and transitive dependency updates.
[GitHub] [nifi] exceptionfactory opened a new pull request #5754: NIFI-9660 Upgrade Apache Tika to 2.3.0
exceptionfactory opened a new pull request #5754: URL: https://github.com/apache/nifi/pull/5754

Description of PR

NIFI-9660 Upgrades Apache Tika from 1.27 to 2.3.0 for the framework content viewer as well as referencing processors. Changes include updating Tika metadata property references and replacing the `tika-parsers` dependency with the `tika-parsers-standard-package` dependency in `nifi-media-processors`.

In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [X] Has your PR been rebased against the latest commit within the target branch (typically `main`)?
- [X] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder?
- [X] Have you written or updated unit tests to verify your changes?
- [X] Have you verified that the full build is successful on JDK 8?
- [ ] Have you verified that the full build is successful on JDK 11?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`?
- [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`?
- [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[jira] [Created] (NIFI-9660) Upgrade Apache Tika to 2.3.0
David Handermann created NIFI-9660: -- Summary: Upgrade Apache Tika to 2.3.0 Key: NIFI-9660 URL: https://issues.apache.org/jira/browse/NIFI-9660 Project: Apache NiFi Issue Type: Improvement Components: Core Framework, Extensions Reporter: David Handermann Assignee: David Handermann The framework Content Viewer, the standard {{IdentifyMimeType}} processor, and the {{nifi-media-processors}} module leverage Apache Tika for type detection and metadata parsing. Apache Tika 2.3.0 incorporates a number of bug fixes and transitive dependency updates. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [nifi] exceptionfactory commented on pull request #5692: NIFI-9609 Added nifi-snowflake-bundle with a SnowflakeConnectionPool.
exceptionfactory commented on pull request #5692: URL: https://github.com/apache/nifi/pull/5692#issuecomment-1033105888 One other point of consideration: the `snowflake-jdbc` JAR is almost 28 MB due to shading a large number of dependencies. In light of current sizing limitations on the standard NiFi binary, it seems like this should not be part of the standard assembly.
[jira] [Created] (NIFI-9659) UI - Unit tests fail the first few times when run in a "clean" workspace
Rob Fellows created NIFI-9659: - Summary: UI - Unit tests fail the first few times when run in a "clean" workspace Key: NIFI-9659 URL: https://issues.apache.org/jira/browse/NIFI-9659 Project: Apache NiFi Issue Type: Improvement Components: NiFi Registry Affects Versions: 1.16.0 Reporter: Rob Fellows When running the unit tests from a "clean" workspace, they fail the first few times, but they will pass eventually if you just keep running them. There must be some state issue. However, if you run {code:java} git clean -fX && rm -rf .cache-loader-coverage coverage && npm run ci && npm run test {code} they will never pass.
[jira] [Updated] (NIFI-9585) Upgrade H2 to 2.1.210
[ https://issues.apache.org/jira/browse/NIFI-9585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-9585: --- Fix Version/s: 1.16.0 Status: Patch Available (was: In Progress) > Upgrade H2 to 2.1.210 > - > > Key: NIFI-9585 > URL: https://issues.apache.org/jira/browse/NIFI-9585 > Project: Apache NiFi > Issue Type: Improvement >Reporter: David Handermann >Assignee: Matt Burgess >Priority: Major > Fix For: 1.16.0 > > Time Spent: 50m > Remaining Estimate: 0h > > The H2 embedded database below version 2.1.210 includes multiple associated > vulnerabilities related to unsafe XML column handling and other issues. > Multiple NiFi components leverage H2 for local relational data storage. > Although NiFi does not appear to have any direct vulnerabilities as a result > of issues with H2, upgrading to the latest version will avoid false positive > security scans and provide better maintainability. > Due to related database components such as Flyway in NiFi Registry, upgrading > H2 will also require upgrades to related dependencies and services.
[GitHub] [nifi] kevdoran commented on pull request #5742: NIFI-9638 Refactor Google Guava usage in extensions
kevdoran commented on pull request #5742: URL: https://github.com/apache/nifi/pull/5742#issuecomment-1033092011 Will review...
[GitHub] [nifi] thenatog commented on pull request #5694: NIFI-5402 - Disable the tar.gz build artifact by default. Build will …
thenatog commented on pull request #5694: URL: https://github.com/apache/nifi/pull/5694#issuecomment-1033071300 @greyp9 Updated, should cover most of the issues you saw.
[jira] [Assigned] (NIFI-9641) Allow ZooKeeper to respect the chroot suffix for ZK connection strings
[ https://issues.apache.org/jira/browse/NIFI-9641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan Gough reassigned NIFI-9641: -- Assignee: Nathan Gough > Allow ZooKeeper to respect the chroot suffix for ZK connection strings > -- > > Key: NIFI-9641 > URL: https://issues.apache.org/jira/browse/NIFI-9641 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Nathan Gough >Assignee: Nathan Gough >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > This page documents the ability for connection strings to include a 'chroot > suffix' which executes commands relative to the ZK database path specified: > [https://zookeeper.apache.org/doc/r3.6.1/apidocs/zookeeper-server/org/apache/zookeeper/ZooKeeper.html] > > Ensure that NiFi can use the suffix correctly, in particular for Solr > processors. The SolrUtils.java:284 class ignores the possibility of this > suffix so will need to be fixed.
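The chroot suffix described in the ticket above can be illustrated with a small sketch. `ZkConnectString` is a hypothetical helper, not NiFi's actual fix; it shows how a connect string such as `zk1:2181,zk2:2181/solr` splits into the host ensemble and an optional chroot path (per the ZooKeeper client docs, everything from the first `/` after the host list is the chroot):

```java
import java.util.Optional;

// Hypothetical sketch (not the SolrUtils fix itself): separate a
// ZooKeeper connect string into its ensemble and chroot parts.
final class ZkConnectString {
    private final String ensemble;
    private final Optional<String> chroot;

    ZkConnectString(String connectString) {
        // The chroot begins at the first '/' following the host list.
        final int slash = connectString.indexOf('/');
        if (slash < 0) {
            this.ensemble = connectString;
            this.chroot = Optional.empty();
        } else {
            this.ensemble = connectString.substring(0, slash);
            this.chroot = Optional.of(connectString.substring(slash));
        }
    }

    String getEnsemble() { return ensemble; }
    Optional<String> getChroot() { return chroot; }
}
```

A fix along these lines would pass the chroot through to the client instead of dropping it when the host list is parsed.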
[jira] [Created] (NIFI-9658) UI - Can not open Manage Bucket sidenav
Rob Fellows created NIFI-9658: - Summary: UI - Can not open Manage Bucket sidenav Key: NIFI-9658 URL: https://issues.apache.org/jira/browse/NIFI-9658 Project: Apache NiFi Issue Type: Bug Components: NiFi Registry Affects Versions: 1.16.0 Reporter: Rob Fellows Clicking on the edit (pencil) icon in the Bucket listing does nothing. There is an error in the console: {code:java} ERROR TypeError: Cannot read properties of undefined (reading 'open') at t._next (nf-registry.bundle.min.b5fe240b61e098505c2c.js:1:227901) at t.__tryOrUnsub (vendor.min.5952852aa92bb41cb265.js:3522:5764) at t.next (vendor.min.5952852aa92bb41cb265.js:3522:4909) at t._next (vendor.min.5952852aa92bb41cb265.js:3522:3985) at t.next (vendor.min.5952852aa92bb41cb265.js:3522:3656) at t.notifyNext (vendor.min.5952852aa92bb41cb265.js:6028:10971) at t._next (vendor.min.5952852aa92bb41cb265.js:5560:86065) at t.next (vendor.min.5952852aa92bb41cb265.js:3522:3656) at Object.complete (vendor.min.5952852aa92bb41cb265.js:6267:2967) at Object.n (vendor.min.5952852aa92bb41cb265.js:3522:5522) {code} This translates into this when not built with compressed js: {code:java} ERROR TypeError: Cannot read properties of undefined (reading 'open') at SafeSubscriber._next (nf-registry-manage-bucket.js:96:1) at SafeSubscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.SafeSubscriber.__tryOrUnsub (Subscriber.js:192:1) at SafeSubscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.SafeSubscriber.next (Subscriber.js:130:1) at Subscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.Subscriber._next (Subscriber.js:76:1) at Subscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.Subscriber.next (Subscriber.js:53:1) at SwitchMapSubscriber.push../node_modules/rxjs/_esm5/internal/operators/switchMap.js.SwitchMapSubscriber.notifyNext (switchMap.js:71:1) at SimpleInnerSubscriber.push../node_modules/rxjs/_esm5/internal/innerSubscribe.js.SimpleInnerSubscriber._next (innerSubscribe.js:14:1) at 
SimpleInnerSubscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.Subscriber.next (Subscriber.js:53:1) at Object.complete (forkJoin.js:55:1) at Object.wrappedComplete (Subscriber.js:175:52) {code}
[GitHub] [nifi-site] asfgit closed pull request #57: Update community slack invite link
asfgit closed pull request #57: URL: https://github.com/apache/nifi-site/pull/57
[GitHub] [nifi] thenatog opened a new pull request #5753: NIFI-7333 - Added a new property to allow specifying whether to use N…
thenatog opened a new pull request #5753: URL: https://github.com/apache/nifi/pull/5753 …iFi's truststore or cacerts when connecting to an OIDC provider. NIFI-7333 - Added a check for SSLContext being set. Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [x] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [x] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? 
- [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[GitHub] [nifi] rfellows commented on pull request #5751: [NIFI-9554] Support building and running NiFi on arm64 platforms and consume nifi-fds 0.3.0
rfellows commented on pull request #5751: URL: https://github.com/apache/nifi/pull/5751#issuecomment-1032934553 Reviewing...
[GitHub] [nifi-minifi-cpp] lordgamez opened a new pull request #1260: MINIFICPP-1725 Upgrade libwebsockets and remove workaround
lordgamez opened a new pull request #1260: URL: https://github.com/apache/nifi-minifi-cpp/pull/1260 https://issues.apache.org/jira/browse/MINIFICPP-1725 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically main)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible.
[GitHub] [nifi] tpalfy commented on pull request #5692: NIFI-9609 Added nifi-snowflake-bundle with a SnowflakeConnectionPool.
tpalfy commented on pull request #5692: URL: https://github.com/apache/nifi/pull/5692#issuecomment-1032869488 Let me try to explain my concerns in more detail through a demonstration. Here's how I would imagine the propertyDescriptor-related duplication removal:

1. We would need to settle for the properties not having different `name` and `displayName`. Btw, this decision is basically final; we won't be able to back down on it.
2. Here's how the code would look. Instead of this:

```java
private static final List<PropertyDescriptor> properties;

static {
    final List<PropertyDescriptor> props = new ArrayList<>();
    props.add(SNOWFLAKE_URL);
    props.add(SNOWFLAKE_USER);
    props.add(SNOWFLAKE_PASSWORD);
    props.add(VALIDATION_QUERY);
    props.add(MAX_WAIT_TIME);
    props.add(MAX_TOTAL_CONNECTIONS);
    props.add(MIN_IDLE);
    props.add(MAX_IDLE);
    props.add(MAX_CONN_LIFETIME);
    props.add(EVICTION_RUN_PERIOD);
    props.add(MIN_EVICTABLE_IDLE_TIME);
    props.add(SOFT_MIN_EVICTABLE_IDLE_TIME);
    properties = Collections.unmodifiableList(props);
}

@Override
protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
    return properties;
}
```

We would have this:

```java
@Override
protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
    final List<PropertyDescriptor> props = new ArrayList<>();
    Collection<PropertyDescriptor> becauseImSnowflakeIDontNeedTheseProperties = Arrays.asList(
        DATABASE_URL,
        DB_DRIVERNAME,
        DB_DRIVER_LOCATION,
        KERBEROS_USER_SERVICE,
        KERBEROS_CREDENTIALS_SERVICE,
        KERBEROS_PRINCIPAL,
        KERBEROS_PASSWORD,
        DB_USER,
        DB_PASSWORD,
        VALIDATION_QUERY
    );
    props.add(SNOWFLAKE_URL);
    props.add(SNOWFLAKE_USER);
    props.add(SNOWFLAKE_PASSWORD);
    props.add(VALIDATION_QUERY);
    props.addAll(super.getSupportedPropertyDescriptors());
    props.removeAll(becauseImSnowflakeIDontNeedTheseProperties);
    return props;
}
```

To me there's no question that the former is better, even if we agree to 1.
With the datasource duplication we would do something like this: in DBCPConnectionPool we would add the following method:

```java
protected void setDatasourceSettingsExceptDriverOfCourse(
    String dburl,
    String user,
    String passw,
    Integer maxTotal,
    Long maxWaitMillis,
    Integer minIdle,
    Integer maxIdle,
    Long maxConnLifetimeMillis,
    Long timeBetweenEvictionRunsMillis,
    Long minEvictableIdleTimeMillis,
    Long softMinEvictableIdleTimeMillis
) {
    dataSource.setUrl(dburl);
    dataSource.setUsername(user);
    dataSource.setPassword(passw);
    dataSource.setMaxWaitMillis(maxWaitMillis);
    dataSource.setMaxTotal(maxTotal);
    dataSource.setMinIdle(minIdle);
    dataSource.setMaxIdle(maxIdle);
    dataSource.setMaxConnLifetimeMillis(maxConnLifetimeMillis);
    dataSource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
    dataSource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
    dataSource.setSoftMinEvictableIdleTimeMillis(softMinEvictableIdleTimeMillis);
}
```

And instead of this:

```java
dataSource = new BasicDataSource();
dataSource.setDriver(getDriver(driverName, dburl));
dataSource.setMaxWaitMillis(maxWaitMillis);
dataSource.setMaxTotal(maxTotal);
dataSource.setMinIdle(minIdle);
dataSource.setMaxIdle(maxIdle);
dataSource.setMaxConnLifetimeMillis(maxConnLifetimeMillis);
dataSource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
dataSource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
dataSource.setSoftMinEvictableIdleTimeMillis(softMinEvictableIdleTimeMillis);
if (validationQuery != null && !validationQuery.isEmpty()) {
    dataSource.setValidationQuery(validationQuery);
    dataSource.setTestOnBorrow(true);
}
dataSource.setUrl(dburl);
dataSource.setUsername(user);
dataSource.setPassword(passw);
```

We would have this:

```java
dataSource = new BasicDataSource();
dataSource.setDriver(getDriver(driverName, dburl));
setDatasourceSettingsExceptDriverOfCourse(
    dburl, user, passw, maxTotal, maxWaitMillis, minIdle, maxIdle, maxConnLifetimeMillis,
    timeBetweenEvictionRunsMillis, minEvictableIdleTimeMillis, softMinEvictableIdleTimeMillis);
```
[GitHub] [nifi] tpalfy commented on pull request #5738: NIFI-9327: Added timewindow query to QueryNiFiReportingTask and MetricsEventReportingTask
tpalfy commented on pull request #5738: URL: https://github.com/apache/nifi/pull/5738#issuecomment-1032839090 LGTM merged to main
[GitHub] [nifi] tpalfy removed a comment on pull request #5692: NIFI-9609 Added nifi-snowflake-bundle with a SnowflakeConnectionPool.
tpalfy removed a comment on pull request #5692: URL: https://github.com/apache/nifi/pull/5692#issuecomment-1032838536 LGTM merged to main
[GitHub] [nifi] tpalfy commented on pull request #5692: NIFI-9609 Added nifi-snowflake-bundle with a SnowflakeConnectionPool.
tpalfy commented on pull request #5692: URL: https://github.com/apache/nifi/pull/5692#issuecomment-1032838536 LGTM merged to main
[jira] [Commented] (NIFI-9327) QueryNiFi reporting task does not keep track of state
[ https://issues.apache.org/jira/browse/NIFI-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17488995#comment-17488995 ] ASF subversion and git services commented on NIFI-9327: --- Commit 72e6accc1240f7a4e7e7ee1f658615349e9db0e1 in nifi's branch refs/heads/main from Lehel [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=72e6acc ] NIFI-9327: Added timewindow query to QueryNiFiReportingTask and MetricsEventReportingTask > QueryNiFi reporting task does not keep track of state > - > > Key: NIFI-9327 > URL: https://issues.apache.org/jira/browse/NIFI-9327 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Lehel Boér >Assignee: Lehel Boér >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > The QueryNiFi reporting task does not keep track of state, which is > problematic. If I've configured my reporting task to run every minute with: > {code:java} > SELECT * FROM BULLETINS > {code} > Since the bulletins are available for 5 minutes in NiFi, I'll have up to 5 > occurrences of each bulletin sent to whatever sink is configured. > Some kind of state should be tracked for both provenance data and bulletins. -- This message was sent by Atlassian Jira (v8.20.1#820001)
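The commit above adds a time-window query to the reporting tasks. A minimal sketch of the underlying idea, hypothetical and not NiFi's actual code, is to remember where the previous query window ended so that bulletins visible across several runs are reported only once:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Collectors;

// Hypothetical sketch of time-window tracking: only bulletins whose
// timestamps fall strictly after the previous window's end (and no
// later than "now") are returned, then the window advances.
final class TimeWindowTracker {
    private final AtomicLong lastWindowEnd = new AtomicLong(0);

    List<Long> filterNew(List<Long> bulletinTimestamps, long now) {
        final long windowStart = lastWindowEnd.getAndSet(now);
        return bulletinTimestamps.stream()
                .filter(ts -> ts > windowStart && ts <= now)
                .collect(Collectors.toList());
    }
}
```

With this scheme, a bulletin that stays visible for five minutes is still emitted exactly once, in the window in which it first appeared.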
[GitHub] [nifi] pvillard31 commented on pull request #5692: NIFI-9609 Added nifi-snowflake-bundle with a SnowflakeConnectionPool.
pvillard31 commented on pull request #5692: URL: https://github.com/apache/nifi/pull/5692#issuecomment-1032805239 @tpalfy - while I understand the willingness not to overcomplicate the code in the parent classes, I believe this is worth the effort, as Snowflake may not be the only implementation we'll be seeing in the future. We can definitely think of additional implementations that would make things easier for NiFi users when interacting with SaaS-based databases. Can we keep as much as possible in the parent objects? I do share the concern about modifying and complicating the parent code, but I think we have to if we look at this in the long run. Happy to hear additional thoughts though. @joewitt - you commented on this PR, how do you feel about it?
[GitHub] [nifi] tpalfy commented on a change in pull request #5692: NIFI-9609 Added nifi-snowflake-bundle with a SnowflakeConnectionPool.
tpalfy commented on a change in pull request #5692: URL: https://github.com/apache/nifi/pull/5692#discussion_r801752690 ## File path: nifi-nar-bundles/nifi-snowflake-bundle/nifi-snowflake-services/src/main/java/org/apache/nifi/snowflake/service/SnowflakeComputingConnectionPool.java ## @@ -0,0 +1,265 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.snowflake.service; + +import net.snowflake.client.jdbc.SnowflakeDriver; +import org.apache.commons.dbcp2.BasicDataSource; +import org.apache.commons.lang3.StringUtils; +import org.apache.nifi.annotation.behavior.DynamicProperties; +import org.apache.nifi.annotation.behavior.DynamicProperty; +import org.apache.nifi.annotation.behavior.RequiresInstanceClassLoading; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnEnabled; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.PropertyValue; +import org.apache.nifi.components.ValidationContext; +import org.apache.nifi.components.ValidationResult; +import org.apache.nifi.controller.ConfigurationContext; +import org.apache.nifi.dbcp.DBCPConnectionPool; +import org.apache.nifi.expression.ExpressionLanguageScope; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.reporting.InitializationException; + +import java.sql.Driver; +import java.sql.DriverManager; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * Implementation of Database Connection Pooling Service for Snowflake. + * Apache DBCP is used for connection pooling functionality. + */ +@Tags({"snowflake", "dbcp", "jdbc", "database", "connection", "pooling", "store"}) +@CapabilityDescription("Provides Snowflake Connection Pooling Service. 
Connections can be asked from pool and returned after usage.") +@DynamicProperties({ +@DynamicProperty(name = "JDBC property name", +value = "Snowflake JDBC property value", +expressionLanguageScope = ExpressionLanguageScope.VARIABLE_REGISTRY, +description = "Snowflake JDBC driver property name and value applied to JDBC connections."), +@DynamicProperty(name = "SENSITIVE.JDBC property name", +value = "Snowflake JDBC property value", +expressionLanguageScope = ExpressionLanguageScope.NONE, +description = "Snowflake JDBC driver property name prefixed with 'SENSITIVE.' handled as a sensitive property.") +}) +@RequiresInstanceClassLoading +public class SnowflakeComputingConnectionPool extends DBCPConnectionPool { + +public static final PropertyDescriptor SNOWFLAKE_URL = new PropertyDescriptor.Builder() +.displayName("Snowflake URL") +.name("snowflake-url") +.description("E.g. 'cb56215.europe-west2.gcp.snowflakecomputing.com/?db=MY_DB'." + +" The '/?db=MY_DB' part can have other connection parameters as well." + +" It can also be omitted but in that case tables need to be referenced with fully qualified names e.g. 'MY_DB.PUBLIC.MY_TABLE'.") +.defaultValue(null) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.required(true) +.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) +.build(); + +public static final PropertyDescriptor SNOWFLAKE_USER = new PropertyDescriptor.Builder() +.displayName("Snowflake User Name") +.name("snowflake-user") +.defaultValue(null) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +.expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY) +.build(); + +public static final PropertyDescriptor SNOWFLAKE_PASSWORD = new PropertyDescriptor.Builder() +.displayName("Snowflake Password") +.name("snowflake-password") +.defaultValue(null) +.required(false) +.sensitive(true) +.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) +
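The `SNOWFLAKE_URL` property in the diff above accepts a bare account hostname with optional connection parameters (e.g. `/?db=MY_DB`). As an illustration only, not code from the PR, a hypothetical helper might normalize such a value into a full JDBC URL by prepending the scheme when it is absent:

```java
// Hypothetical helper (not part of the PR): derive a JDBC URL from the
// "Snowflake URL" property value, prepending the scheme if omitted.
final class SnowflakeUrlBuilder {
    private static final String JDBC_PREFIX = "jdbc:snowflake://";

    static String toJdbcUrl(String snowflakeUrl) {
        // Leave fully-qualified URLs untouched.
        if (snowflakeUrl.startsWith(JDBC_PREFIX)) {
            return snowflakeUrl;
        }
        return JDBC_PREFIX + snowflakeUrl;
    }
}
```

Accepting both forms keeps the property forgiving for users who paste the hostname straight from the Snowflake console.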
[GitHub] [nifi] kevdoran commented on pull request #5751: [NIFI-9554] Support building and running NiFi on arm64 platforms and consume nifi-fds 0.3.0
kevdoran commented on pull request #5751: URL: https://github.com/apache/nifi/pull/5751#issuecomment-1032719522 Thanks @scottyaslan! I can test the arm64 build as well as the functionality testing of a running nifi for various scenarios (cluster, standalone, secure, unsecure, etc).
[jira] [Assigned] (MINIFICPP-1740) Implement FetchFile processor
[ https://issues.apache.org/jira/browse/MINIFICPP-1740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi reassigned MINIFICPP-1740: Assignee: Gábor Gyimesi > Implement FetchFile processor > - > > Key: MINIFICPP-1740 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1740 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Martin Zink >Assignee: Gábor Gyimesi >Priority: Major > > FetchFile has a [nifi > version|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.6.0/org.apache.nifi.processors.standard.FetchFile/], > but it is missing in MiNiFi C++. > FetchFile could expand the capabilities of remote processing while using > MiNiFi. > We should add ListFile as well. > https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.ListFile/index.html
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5692: NIFI-9609 Added nifi-snowflake-bundle with a SnowflakeConnectionPool.
exceptionfactory commented on a change in pull request #5692: URL: https://github.com/apache/nifi/pull/5692#discussion_r801650101 ## File path: nifi-nar-bundles/nifi-snowflake-bundle/nifi-snowflake-services/src/test/java/org/apache/nifi/snowflake/service/SnowflakeConnectionPoolIT.java ## @@ -0,0 +1,86 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.snowflake.service; + +import org.apache.nifi.processor.AbstractSessionFactoryProcessor; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSessionFactory; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.util.TestRunner; +import org.apache.nifi.util.TestRunners; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import java.sql.Connection; +import java.sql.ResultSet; +import java.sql.Statement; + +/** + * Set the following constants: + * SNOWFLAKE_URL + * SNOWFLAKE_USER + * SNOWFLAKE_PASSWORD + * TABLE_NAME + */ +public class SnowflakeConnectionPoolIT { +public static final String SNOWFLAKE_URL = "tm55946.us-east-2.aws.snowflakecomputing.com"; Review comment: The specific URL should be changed to something generic, perhaps `hostname.snowflakecomputing.com`. ## File path: nifi-nar-bundles/nifi-snowflake-bundle/nifi-snowflake-services/src/main/java/org/apache/nifi/snowflake/service/SnowflakeComputingConnectionPool.java ## @@ -0,0 +1,265 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.snowflake.service; + +import net.snowflake.client.jdbc.SnowflakeDriver; +import org.apache.commons.dbcp2.BasicDataSource; +import org.apache.commons.lang3.StringUtils; +import org.apache.nifi.annotation.behavior.DynamicProperties; +import org.apache.nifi.annotation.behavior.DynamicProperty; +import org.apache.nifi.annotation.behavior.RequiresInstanceClassLoading; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnEnabled; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.PropertyValue; +import org.apache.nifi.components.ValidationContext; +import org.apache.nifi.components.ValidationResult; +import org.apache.nifi.controller.ConfigurationContext; +import org.apache.nifi.dbcp.DBCPConnectionPool; +import org.apache.nifi.expression.ExpressionLanguageScope; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.reporting.InitializationException; + +import java.sql.Driver; +import java.sql.DriverManager; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.stream.Collectors; + +/** + * Implementation of Database Connection Pooling Service for Snowflake. + * Apache DBCP is used for connection pooling functionality. + */ +@Tags({"snowflake", "dbcp", "jdbc", "database", "connection", "pooling", "store"}) +@CapabilityDescription("Provides Snowflake Connection Pooling Service. Connections can be asked from pool and returned after usage.") +@DynamicProperties({ +@DynamicProperty(name = "JDBC property name", +value = "Snowflake JDBC property value", +expressionLanguageScope = ExpressionLanguageScope.VARIABLE_REGISTRY, +
[GitHub] [nifi] Lehel44 commented on pull request #5752: NIFI-9657 Create MoveADLS processor
Lehel44 commented on pull request #5752: URL: https://github.com/apache/nifi/pull/5752#issuecomment-1032624857 Reviewing -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (MINIFICPP-1740) Implement FetchFile processor
[ https://issues.apache.org/jira/browse/MINIFICPP-1740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martin Zink updated MINIFICPP-1740: --- Description: The FetchFile has a [nifi version|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.6.0/org.apache.nifi.processors.standard.FetchFile/], but it is missing in MiNiFi C++. FetchFile could expand the capabilities of remote processing while using minifi. We should add ListFile as well. https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.ListFile/index.html was: The FetchFile has a [nifi version|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.6.0/org.apache.nifi.processors.standard.FetchFile/], but it is missing in MiNiFi C++. FetchFile could expand the capabilities of remote processing while using minifi. > Implement FetchFile processor > - > > Key: MINIFICPP-1740 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1740 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Martin Zink >Priority: Major > > The FetchFile has a [nifi > version|https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.6.0/org.apache.nifi.processors.standard.FetchFile/], > but it is missing in MiNiFi C++. > FetchFile could expand the capabilities of remote processing while using > minifi. > We should add ListFile as well. > https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.ListFile/index.html -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1255: MINIFICPP-1750 Fix C2 clear operation with corecomponentstate operand
lordgamez commented on a change in pull request #1255:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1255#discussion_r801643210

## File path: libminifi/src/c2/C2Agent.cpp
## @@ -556,6 +519,43 @@ void C2Agent::handle_describe(const C2ContentResponse &resp) {
   enqueue_c2_response(std::move(response));
 }

+void C2Agent::handle_clear(const C2ContentResponse &resp) {
+  if (resp.name == "connection") {
+    for (const auto& connection : resp.operation_arguments) {
+      logger_->log_debug("Clearing connection %s", connection.second.to_string());
+      update_sink_->clearConnection(connection.second.to_string());
+    }
+  } else if (resp.name == "repositories") {
+    update_sink_->drainRepositories();
+  } else if (resp.name == "corecomponentstate") {
+    for (const auto& corecomponent : resp.operation_arguments) {
+      std::vector> components = update_sink_->getComponents(corecomponent.second.to_string());

Review comment:
       Now we iterate through the requested components in the operation arguments instead of using the constant `"corecomponentstate"` that was previously set in `resp.name`.
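The review comment above is about how `handle_clear` dispatches on the operation's name operand and iterates the requested components. A language-neutral sketch of that dispatch is below (the real implementation is C++; the sink method names here are stand-ins for the agent's `update_sink_` API, not the actual MiNiFi interface):

```python
class RecordingSink:
    """Stand-in for the agent's update sink; method names are illustrative only."""
    def __init__(self):
        self.cleared_connections = []
        self.drained = False
        self.cleared_states = []

    def clear_connection(self, name):
        self.cleared_connections.append(name)

    def drain_repositories(self):
        self.drained = True

    def clear_component_state(self, name):
        self.cleared_states.append(name)


def handle_clear(name, operation_arguments, sink):
    """Dispatch a C2 'clear' operation on its name operand."""
    if name == "connection":
        for conn in operation_arguments.values():
            sink.clear_connection(conn)
    elif name == "repositories":
        sink.drain_repositories()
    elif name == "corecomponentstate":
        # Iterate the components actually requested in the operation
        # arguments rather than assuming a fixed operand name.
        for component in operation_arguments.values():
            sink.clear_component_state(component)


sink = RecordingSink()
handle_clear("corecomponentstate", {"1": "GetFile", "2": "PutFile"}, sink)
print(sorted(sink.cleared_states))  # ['GetFile', 'PutFile']
```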
[jira] [Assigned] (MINIFICPP-1725) Upgrade Libwebsockets version and remove workaround in the LibreSSL cmake file
[ https://issues.apache.org/jira/browse/MINIFICPP-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi reassigned MINIFICPP-1725: Assignee: Gábor Gyimesi > Upgrade Libwebsockets version and remove workaround in the LibreSSL cmake file > -- > > Key: MINIFICPP-1725 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1725 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Ferenc Gerlits >Assignee: Gábor Gyimesi >Priority: Minor > > The Libwebsockets version (v4.2-stable) we use as a dependency of > kubernetes-client/c is not prepared for OpenSSL to be compiled together with > it as a subproject; it expects some flavor of OpenSSL to be already present > on the system. > There is a workaround for this problem in {{cmake/BundledLibreSSL.cmake}}: > {noformat} > set(OPENSSL_INCLUDE_DIRS "${OPENSSL_INCLUDE_DIR}" CACHE STRING "" FORCE) > # workaround for libwebsockets > {noformat} > but it would be good if we did not need it. > I fixed the problem in the 4.2 version in > [https://github.com/warmcat/libwebsockets/pull/2535], and this fix is > available in Libwebsockets version >= 4.3.1. However, other changes in > version 4.3 stop this from working again. CMake now fails with this error: > {noformat} > CMake Error in build/_deps/websockets-src/lib/CMakeLists.txt: > Target "websockets" INTERFACE_INCLUDE_DIRECTORIES property contains path: > "/home/fgerlits/src/minifi/build/thirdparty/libressl-install/include" > which is prefixed in the build directory. > CMake Error in build/_deps/websockets-src/lib/CMakeLists.txt: > Target "websockets" INTERFACE_INCLUDE_DIRECTORIES property contains path: > "/home/fgerlits/src/minifi/build/thirdparty/libressl-install/include" > which is prefixed in the build directory. Target "websockets" > INTERFACE_INCLUDE_DIRECTORIES property contains path: > "/home/fgerlits/src/minifi/build/thirdparty/libressl-install/include" > which is prefixed in the source directory. 
> {noformat}
[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request #1259: MINIFICPP-1748 - Modify log properties through the c2 protocol
adamdebreceni opened a new pull request #1259:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1259

   Thank you for submitting a contribution to Apache NiFi - MiNiFi C++.

   In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
   - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   - [ ] Has your PR been rebased against the latest commit within the target branch (typically main)?
   - [ ] Is your initial contribution a single, squashed commit?

   ### For code changes:
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the LICENSE file?
   - [ ] If applicable, have you updated the NOTICE file?

   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible.
[jira] [Commented] (MINIFICPP-1741) Docker-based integration tests should only test features which are available
[ https://issues.apache.org/jira/browse/MINIFICPP-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17488694#comment-17488694 ] Ádám Markovics commented on MINIFICPP-1741: --- # Running all docker tests could be controlled from a CMake cache variable, which would be passed to DockerVerify.sh # Running only the enabled features could be controlled from multiple CMake cache variables, which would be passed to DockerVerify.sh, similarly to DockerBuild.sh # Running only one test explicitly asked by the user could be controlled from the CLI (as it is a development use case), with the user directly calling DockerVerify.sh Also, MINIFI_VERSION could be read from the CMake cache (or a separate file dedicated to the version) to eliminate that command line argument for the DockerVerify.sh script, leaving only one for the tests. > Docker-based integration tests should only test features which are available > > > Key: MINIFICPP-1741 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1741 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Ferenc Gerlits >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > > To run the docker-based integration tests, you > # run {{cmake}} with some set of flags to enable or disable certain features; > # run {{make docker}} to create the minifi image; > # run {{make docker-verify}} to run the tests. > The problem is that the image created in (2) will only contain the features > selected in (1), but (3) will run all tests, even those which require a > feature not included in the image – these tests will fail. > We should find a way to only run tests for features enabled in the {{cmake}} > step, as it is done in the case of unit tests. > We should have 3 options of running docker tests: > # Run all of them > # Run only the ones whose flags are enabled in CMake (this should be the > default) > # Run only one explicitly asked by the user (useful during development)
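The second option discussed above (run only the tests whose feature flags are enabled) boils down to filtering feature files by the flags the docker image was built with. A minimal Python sketch, assuming a hand-written mapping from feature file to required CMake flags — the flag names and the mapping itself are hypothetical, not taken from the MiNiFi build:

```python
def features_to_run(enabled_flags: set, feature_requirements: dict) -> list:
    """Return the feature files whose required CMake flags are all enabled."""
    return sorted(feature for feature, required in feature_requirements.items()
                  if required <= enabled_flags)  # subset test: all requirements met


# Hypothetical mapping; real feature files would declare their own requirements.
requirements = {
    "features/file_system_operations.feature": set(),       # no extra features needed
    "features/kafka.feature": {"ENABLE_LIBRDKAFKA"},
    "features/s3.feature": {"ENABLE_AWS"},
}

# With only the Kafka extension enabled, the S3 tests are skipped.
print(features_to_run({"ENABLE_LIBRDKAFKA"}, requirements))
# ['features/file_system_operations.feature', 'features/kafka.feature']
```

The resulting list could then be handed to behave as positional feature-file arguments, much like the explicit `behave "${BEHAVE_OPTS[@]}" "features/kafka.feature"` invocation seen elsewhere in this thread.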
[jira] [Updated] (MINIFICPP-1741) Docker-based integration tests should only test features which are available
[ https://issues.apache.org/jira/browse/MINIFICPP-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ádám Markovics updated MINIFICPP-1741: -- Description: To run the docker-based integration tests, you # run {{cmake}} with some set of flags to enable or disable certain features; # run {{make docker}} to create the minifi image; # run {{make docker-verify}} to run the tests. The problem is that the image created in (2) will only contain the features selected in (1), but (3) will run all tests, even those which require a feature not included in the image – these tests will fail. We should find a way to only run tests for features enabled in the {{cmake}} step, as it is done in the case of unit tests. We should have 3 options of running docker tests: # Run all of them # Run only the ones whose flags are enabled in CMake (this should be the default) # Run only one explicitly asked by the user (useful during development) was: To run the docker-based integration tests, you # run {{cmake}} with some set of flags to enable or disable certain features; # run {{make docker}} to create the minifi image; # run {{make docker-verify}} to run the tests. The problem is that the image created in (2) will only contain the features selected in (1), but (3) will run all tests, even those which require a feature not included in the image – these tests will fail. We should find a way to only run tests for features enabled in the {{cmake}} step, as it is done in the case of unit tests. 
> Docker-based integration tests should only test features which are available > > > Key: MINIFICPP-1741 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1741 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Ferenc Gerlits >Priority: Minor > Labels: MiNiFi-CPP-Hygiene > > To run the docker-based integration tests, you > # run {{cmake}} with some set of flags to enable or disable certain features; > # run {{make docker}} to create the minifi image; > # run {{make docker-verify}} to run the tests. > The problem is that the image created in (2) will only contain the features > selected in (1), but (3) will run all tests, even those which require a > feature not included in the image – these tests will fail. > We should find a way to only run tests for features enabled in the {{cmake}} > step, as it is done in the case of unit tests. > We should have 3 options of running docker tests: > # Run all of them > # Run only the ones whose flags are enabled in CMake (this should be the > default) > # Run only one explicitly asked by the user (useful during development)
[jira] [Updated] (NIFI-9657) Create MoveADLS processor
[ https://issues.apache.org/jira/browse/NIFI-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timea Barna updated NIFI-9657: -- Status: Patch Available (was: In Progress) > Create MoveADLS processor > - > > Key: NIFI-9657 > URL: https://issues.apache.org/jira/browse/NIFI-9657 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Timea Barna >Assignee: Timea Barna >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Similarly to the List/Fetch/Put ADLS processors we already have, we need to > create a MoveADLS processor (similar to MoveHDFS) so data can be moved from > one location to another. Right now data has to be fetched and then pushed, > which is highly inefficient. > ListADLS -> MoveADLS > The ListADLS would list the files within a file system / directory > and the MoveADLS would give the option to specify the destination where the > file should be moved. Files will no longer be available in the source location.
[GitHub] [nifi] timeabarna opened a new pull request #5752: NIFI-9657 Create MoveADLS processor
timeabarna opened a new pull request #5752:
URL: https://github.com/apache/nifi/pull/5752

   https://issues.apache.org/jira/browse/NIFI-9657

   Description of PR: Enables moving content into an Azure Data Lake Storage Gen 2.

   In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
   - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
   - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)?
   - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._

   ### For code changes:
   - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder?
   - [ ] Have you written or updated unit tests to verify your changes?
   - [ ] Have you verified that the full build is successful on JDK 8?
   - [ ] Have you verified that the full build is successful on JDK 11?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`?
   - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`?
   - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?
   ### For documentation related changes:
   - [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

   ### Note:
   Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[GitHub] [nifi-minifi-cpp] adam-markovics commented on a change in pull request #1252: MINIFICPP-1686 - Processor destructors are not called
adam-markovics commented on a change in pull request #1252: URL: https://github.com/apache/nifi-minifi-cpp/pull/1252#discussion_r801361215 ## File path: docker/DockerVerify.sh ## @@ -74,4 +74,5 @@ BEHAVE_OPTS=(-f pretty --logging-level INFO --logging-clear-handlers) # behave "${BEHAVE_OPTS[@]}" "features/file_system_operations.feature" -n "Get and put operations run in a simple flow" cd "${docker_dir}/test/integration" exec - behave "${BEHAVE_OPTS[@]}" + #behave "${BEHAVE_OPTS[@]}" + behave "${BEHAVE_OPTS[@]}" "features/kafka.feature" -n "MiNiFi consumes data from a kafka topic" Review comment: Oops, yes. :)
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1252: MINIFICPP-1686 - Processor destructors are not called
lordgamez commented on a change in pull request #1252: URL: https://github.com/apache/nifi-minifi-cpp/pull/1252#discussion_r801353913 ## File path: docker/DockerVerify.sh ## @@ -74,4 +74,5 @@ BEHAVE_OPTS=(-f pretty --logging-level INFO --logging-clear-handlers) # behave "${BEHAVE_OPTS[@]}" "features/file_system_operations.feature" -n "Get and put operations run in a simple flow" cd "${docker_dir}/test/integration" exec - behave "${BEHAVE_OPTS[@]}" + #behave "${BEHAVE_OPTS[@]}" + behave "${BEHAVE_OPTS[@]}" "features/kafka.feature" -n "MiNiFi consumes data from a kafka topic" Review comment: You accidentally left this change in :)